Exporting a schema copies only the structure (tables, views, functions, and indexes), not the data, into a .sql file that can be version-controlled or migrated.
Exporting a schema lets teams version-control database structure, spin up testing environments quickly, migrate between servers, or share the data model with other systems like MySQL.
Use pg_dump --schema-only (or -s) to exclude rows. This creates a concise .sql file containing CREATE statements for every object in the chosen database or schema.
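As a minimal sketch (the database name mydb and the output file name are assumptions):

```bash
# Dump only the structure of "mydb"; no table rows are copied
pg_dump --schema-only -f mydb_schema.sql mydb
```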
-s or --schema-only omits all row data, while --no-owner and --no-privileges strip ownership and GRANT statements, which is helpful when restoring as a different role.
To export a single schema, combine -n with -s. Example: pg_dump -s -n ecommerce -f ecommerce_schema.sql mydb. Only objects inside the ecommerce schema will be scripted.
To export several schemas, repeat -n: pg_dump -s -n public -n audit -f core_schema.sql mydb. Each -n flag adds another schema pattern to include.
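Because -n accepts psql-style patterns, one flag can also match several schemas at once. A sketch assuming schemas named tenant_1, tenant_2, and so on:

```bash
# Wildcard pattern: every schema whose name starts with "tenant_" is included
pg_dump -s -n 'tenant_*' -f tenant_schemas.sql mydb
```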
To restore, feed the file to psql: psql -d targetdb -f ecommerce_schema.sql. Ensure the target database is empty, or object names will collide, unless you specified --clean during the dump.
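If the target may already contain objects, a sketch using --clean (and its companion --if-exists) emits DROP statements ahead of each CREATE; the names carry over from the example above:

```bash
# Dump with DROP ... IF EXISTS statements so a restore replaces existing objects
pg_dump -s --clean --if-exists -n ecommerce -f ecommerce_schema.sql mydb
psql -d targetdb -f ecommerce_schema.sql
```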
The output compresses well. Drop the -f flag and pipe to gzip: pg_dump -s -n ecommerce mydb | gzip > ecommerce_schema.sql.gz. To restore, run gunzip -c ecommerce_schema.sql.gz | psql -d targetdb.
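Alternatively, pg_dump's custom format is compressed by default and restores with pg_restore instead of psql; a sketch with the same assumed names:

```bash
# Custom-format archive (-Fc) is compressed out of the box
pg_dump -Fc -s -f ecommerce_schema.dump mydb
pg_restore -d targetdb ecommerce_schema.dump
```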
Schedule nightly dumps, commit them to Git, store them alongside application migrations, and automate checksum comparisons to catch accidental structure drift.
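A minimal drift-check sketch, assuming the committed copy lives at schema/committed_schema.sql and the database is called mydb:

```bash
#!/usr/bin/env sh
# Nightly drift check: dump a normalized schema and compare it to the
# copy committed to Git (paths and database name are assumptions)
pg_dump -s --no-owner --no-privileges mydb > /tmp/schema_today.sql
if ! diff -q schema/committed_schema.sql /tmp/schema_today.sql >/dev/null; then
    echo "schema drift detected on $(date)" >&2
fi
```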
Omitting dependent objects: remember that sequences, custom types, and functions are part of the schema; pg_dump includes them automatically when -s is used (the quick check after these pitfalls confirms it).
Mixing schema and data: avoid pg_dump -s -a; the two flags are mutually exclusive, and pg_dump will refuse to run. Use either schema-only or data-only, not both.
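A quick way to confirm dependent objects were captured, assuming the dump file from the earlier example:

```bash
# List the sequences, types, and functions scripted into the dump
grep -E '^CREATE (SEQUENCE|TYPE|FUNCTION)' ecommerce_schema.sql
```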
pg_dump does not lock the database. It dumps from a consistent snapshot inside a single transaction and takes only ACCESS SHARE locks, so ordinary reads and writes continue; only conflicting DDL has to wait.
There is no native MySQL export. Dump to SQL, then use tools like pgloader or manual editing to adjust data types before importing into MySQL.
Size depends on object count, not rows, but large function bodies or many indexes can expand the file. Compression typically reduces it by 80-90%.
Schema-only dumps are also fast. Because they skip the data, they complete much more quickly than full dumps and use minimal I/O.
Superuser privileges are not required. You only need CONNECT on the database, USAGE on the schema, and SELECT on all its objects.
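A sketch of those minimal grants for a dedicated dump role (the role and schema names are assumptions):

```bash
# Grant just enough for a schema-only dump of the "ecommerce" schema
psql -d mydb -c 'GRANT CONNECT ON DATABASE mydb TO dump_role'
psql -d mydb -c 'GRANT USAGE ON SCHEMA ecommerce TO dump_role'
psql -d mydb -c 'GRANT SELECT ON ALL TABLES IN SCHEMA ecommerce TO dump_role'
```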