Back up PostgreSQL by exporting data and schema with pg_dump or creating a physical copy with pg_basebackup.
Hardware failure, human error, or software bugs can corrupt data. Regular backups let you restore service quickly and meet compliance standards.
pg_dump creates logical backups (SQL or custom format) of a single database; pg_dumpall adds global objects such as roles and tablespaces. pg_basebackup copies the entire data directory and, combined with WAL archiving, supports point-in-time recovery (PITR).
Run pg_dump from the OS shell. Specify the database, output file, format, and credentials. Logical backups are portable across major versions.
pg_dump -h localhost -U admin -F c -Z 9 -f /backups/ecommerce_$(date +%F).dump ecommerce_db
Run pg_basebackup as a superuser or a role with the REPLICATION privilege. It streams data files and WAL, producing a binary copy that supports PITR.
pg_basebackup -h 127.0.0.1 -U replica -D /backups/cluster_$(date +%F) -Ft -z -P --wal-method=stream
Automate nightly logical dumps for small DBs and weekly physical backups for large clusters. Store files offsite and test restores monthly.
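A minimal crontab for this rotation might look like the following (user, paths, and database name are placeholders; note that % must be escaped in crontab entries):

```shell
# Nightly logical dump at 02:00 (custom format, max compression)
0 2 * * * postgres pg_dump -F c -Z 9 -f /backups/ecommerce_$(date +\%F).dump ecommerce_db

# Weekly physical base backup, Sundays at 03:00
0 3 * * 0 postgres pg_basebackup -D /backups/cluster_$(date +\%F) -Ft -z --wal-method=stream
```

Pair this with an offsite sync job and a scheduled test restore, since an unverified backup is only a hope.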
For pg_dump files, run pg_restore into a clean database. For pg_basebackup, start PostgreSQL on the copied data directory or use it as a standby.
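A logical restore from the custom-format dump above might look like this (host, user, database, and file names are illustrative):

```shell
# Recreate an empty target database, then restore the custom-format dump into it
createdb -h localhost -U admin ecommerce_db_restore
pg_restore -h localhost -U admin -d ecommerce_db_restore --no-owner /backups/ecommerce_2024-06-01.dump
```

The --no-owner flag is useful when restoring into an environment where the original object owners do not exist.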
Use -Z in pg_dump for compression and pipe output to gpg or openssl to encrypt. Store keys securely.
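One common pattern pipes the dump straight through gpg so plaintext never touches disk (the recipient key backup@example.com is a placeholder for a key you have imported):

```shell
pg_dump -h localhost -U admin -F c -Z 9 ecommerce_db \
  | gpg --encrypt --recipient backup@example.com \
  > /backups/ecommerce_$(date +%F).dump.gpg
```

Decrypt with gpg --decrypt before running pg_restore, and keep the private key outside the backup location.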
pg_dump runs inside a single MVCC snapshot and takes only ACCESS SHARE locks, so reads and writes continue during the dump; since PostgreSQL 9.3, parallel dumps (-j) use synchronized snapshots for the same consistency.
Custom format (-F c) compresses data; expect 30-70% of on-disk size depending on table contents.
Yes. Use --schema=sales repeatedly to include chosen schemas in the dump.
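For example, to dump two schemas from the database used above (the billing schema name is illustrative):

```shell
# Dump only the sales and billing schemas, custom format
pg_dump -h localhost -U admin -F c \
  --schema=sales --schema=billing \
  -f /backups/sales_billing.dump ecommerce_db
```

The flag also accepts patterns (e.g. --schema='sales*'), and --exclude-schema works the same way in reverse.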