Exporting a schema in Amazon Redshift produces a DDL-only dump you can version-control, share as reproducible DDL across teams, and use to rebuild environments or recreate the schema in another cluster.
Use the PostgreSQL pg_dump utility with the --schema-only flag; this works because Redshift speaks the same wire protocol as PostgreSQL.
pg_dump -h redshift-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod --schema public --schema-only -f public_schema.sql
Repeat --schema for each schema you want, or omit the option to dump every schema: --schema public --schema sales.
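For example, a single command covering both schemas, reusing the endpoint, port, user, and database from the example above (the output filename here is only illustrative):

pg_dump -h redshift-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod --schema public --schema sales --schema-only -f public_and_sales_schema.sql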
Pipe pg_dump into gzip, then stream to S3 with the AWS CLI:

pg_dump ... | gzip | aws s3 cp - s3://team-backups/redshift/public_schema.sql.gz
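To restore from that object later, the same pipeline runs in reverse. This is a sketch assuming the bucket and key from the command above and a reachable target cluster; aws s3 cp with - as the destination streams the object to stdout:

aws s3 cp s3://team-backups/redshift/public_schema.sql.gz - | gunzip | psql -h redshift-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod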
After running the command, the output includes DDL for Customers, Orders, Products, and OrderItems. You can apply the file to another cluster with psql -f public_schema.sql.
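A fuller invocation might look like the following; the target endpoint is hypothetical, while the port, user, and database carry over from the earlier example:

psql -h target-cluster.xyz789.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod -f public_schema.sql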
Run dumps as a read-only user, schedule them during off-peak hours, compress the result, and store at least three previous versions.
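One way to wire these practices together is a nightly crontab entry. This is a minimal sketch, assuming the endpoint and bucket from the earlier examples, credentials supplied via ~/.pgpass or PGPASSWORD, and date-stamped keys so older versions are retained (note that % must be escaped as \% inside crontab):

# Dump the public schema at 02:00 daily (an off-peak hour) and upload a date-stamped copy
0 2 * * * pg_dump -h redshift-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod --schema public --schema-only | gzip | aws s3 cp - s3://team-backups/redshift/public_schema_$(date +\%F).sql.gz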
To include data for seed tables such as Products, omit --schema-only and add --table for each table. Keep large fact tables out to avoid huge files.
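For instance, a DDL-plus-data dump of just the Products table (the table name is written lowercase because Redshift folds unquoted identifiers; the output filename is illustrative):

pg_dump -h redshift-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod --table public.products -f products_seed.sql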
Create a new database or schema, then execute psql -h new-cluster ... -f public_schema.sql.
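Concretely, with a hypothetical new-cluster endpoint, connecting first to the default dev database to create a target database and then applying the dump:

psql -h new-cluster.def456.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d dev -c 'CREATE DATABASE prod_copy'
psql -h new-cluster.def456.us-east-1.redshift.amazonaws.com -p 5439 -U analytics -d prod_copy -f public_schema.sql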
The same approach works with Redshift Serverless, which exposes the same port over a PostgreSQL-compatible endpoint; supply the generated database user and the workgroup endpoint.
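A sketch with a hypothetical workgroup; Serverless endpoints follow the workgroup.account-id.region.redshift-serverless.amazonaws.com pattern:

pg_dump -h my-workgroup.123456789012.us-east-1.redshift-serverless.amazonaws.com -p 5439 -U analytics -d prod --schema public --schema-only -f public_schema.sql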
Dumping does not lock tables: Redshift allows concurrent reads, so dumps run without blocking writes, though a heavy catalog may increase system load.
The file contains only DDL, so even 1,000 tables usually produce a file under a few megabytes. Data is not included unless you omit --schema-only.