Use pg_dump with the -n flag to export a single PostgreSQL schema to a file for backup or migration.
Exporting a single schema lets you back up or migrate a self-contained part of the database, keeping dump files small and restores fast.
Run pg_dump with the -n option. This flag filters objects so that only the chosen schema is written to the output file.
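For example, a minimal invocation might look like the following, where sales and your_db are placeholder schema and database names:

    # Dump only the "sales" schema to a plain-SQL file
    pg_dump -n sales -f sales_schema.sql your_db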
At minimum, supply -U for the role, -h for the host, and -d for the database. The PGPASSWORD environment variable or a .pgpass file can supply the password non-interactively; .pgpass is the more secure of the two, since environment variables can be exposed to other processes.
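Putting the connection flags together, a fuller command might look like this (the host, role, and database names are illustrative):

    # Connection details are placeholders; ~/.pgpass supplies the password
    pg_dump -U backup_user -h db.example.com -d your_db -n sales -f sales_schema.sql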
Use --schema-only to dump DDL only, --data-only for data only, or omit both to include everything.
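As a sketch, a DDL-only dump of the same placeholder schema:

    # Structure only: tables, indexes, constraints, but no rows
    pg_dump -n sales --schema-only -f sales_ddl.sql your_db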
Feed the file into psql: psql -U target_user -d target_db -f sales_schema.sql. Ensure the target schema does not already exist, or drop it first.
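If the schema may already exist, one possible restore sequence is the following (destructive; target_user, target_db, and sales are placeholders):

    # CAUTION: CASCADE drops every object inside the schema
    psql -U target_user -d target_db -c 'DROP SCHEMA IF EXISTS sales CASCADE;'
    psql -U target_user -d target_db -f sales_schema.sql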
Compress the dump with -F c (custom format). Note that parallel dumps with -j require the directory format (-F d); custom and directory archives can then be restored in parallel with pg_restore -j. Always test a restore on a staging database before touching production.
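A sketch of a parallel dump and restore using the directory format (the worker count and names are illustrative):

    # Dump the schema with 4 parallel workers into a directory archive
    pg_dump -F d -j 4 -n sales -f sales_dump_dir your_db
    # Restore it with 4 parallel workers
    pg_restore -j 4 -d target_db sales_dump_dir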
Add --blobs (or -b, spelled --large-objects on PostgreSQL 16 and later) to include large objects in the dump. When -n is used, large objects are skipped by default, so this flag is needed to pull them back in.
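For instance, with the same placeholder names:

    # -b pulls large objects back into a schema-filtered dump
    pg_dump -n sales -b -f sales_schema.sql your_db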
Create a cron job or CI pipeline that calls pg_dump with the desired flags, pipes the output to gzip, and moves it to durable storage such as S3.
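One possible crontab entry, assuming the aws CLI is configured and the paths and bucket name are placeholders (in crontab, % must be escaped as \%):

    # Nightly at 02:00: dump, compress, and upload to S3
    0 2 * * * pg_dump -n sales your_db | gzip > /backups/sales_$(date +\%F).sql.gz && aws s3 cp /backups/sales_$(date +\%F).sql.gz s3://your-bucket/backups/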
When restoring, explicitly run SET search_path TO your_schema, or objects may be created in public.
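A quick post-restore sanity check might look like this (the schema and table names are hypothetical):

    # Verify objects landed in the intended schema, not public
    psql -d target_db -c 'SET search_path TO sales; SELECT count(*) FROM orders;'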
Always pass the correct -d value. Listing databases with \l in psql beforehand prevents surprises.
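From the shell, psql -l prints the same list without opening an interactive session (connection flags are illustrative):

    # List all databases on the target host
    psql -U backup_user -h db.example.com -l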
You can also dump several schemas at once: repeat the -n flag for each schema, e.g. -n sales -n inventory.
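A combined dump under the same placeholder names:

    # Both schemas land in one dump file
    pg_dump -n sales -n inventory -f sales_inventory.sql your_db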
Locking is light: pg_dump takes ACCESS SHARE locks on the tables it dumps and reads from a single consistent snapshot, so normal reads and writes continue. Only conflicting DDL such as ALTER TABLE or DROP TABLE on those tables is blocked for the duration.
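If you want to observe this while a dump runs, you can inspect pg_locks from another session (a sketch; your_db is a placeholder):

    # Show locks held by other sessions, including a running pg_dump
    psql -d your_db -c 'SELECT locktype, mode, relation::regclass AS rel FROM pg_locks WHERE pid <> pg_backend_pid();'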
To encrypt a backup, pipe the output through openssl or gpg: pg_dump ... | gzip | gpg -c -o sales_schema.sql.gz.gpg.
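To restore from such an archive, reverse the pipeline (file and database names are placeholders):

    # Decrypt, decompress, and replay into the target database
    gpg -d sales_schema.sql.gz.gpg | gunzip | psql -d target_db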