Restore a dumped PostgreSQL database into a live cluster using pg_restore or psql.
Restoring brings the data, schema, roles, and other objects inside a dump file back into a running PostgreSQL instance so the database is usable again.
Use pg_restore for custom (-Fc) or directory (-Fd) format dumps; it can parallelize and selectively restore objects. Use psql when the dump is plain SQL (-Fp).
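For reference, a minimal sketch of how each format is produced with pg_dump; the database name galaxy_shop and the backup paths are illustrative:

# Custom format: single compressed archive, restorable with pg_restore
pg_dump -U postgres -Fc -f /backups/galaxy_shop.dump galaxy_shop

# Directory format: one file per table, supports parallel dump and restore
pg_dump -U postgres -Fd -j 4 -f /backups/galaxy_shop_dir galaxy_shop

# Plain SQL: a script you replay with psql
pg_dump -U postgres -Fp -f /backups/galaxy_shop.sql galaxy_shop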
• A valid dump file created with pg_dump.
• PostgreSQL server running and accepting connections.
• A superuser or owner role to create and drop objects. (Quick ways to verify all three are sketched after this list.)
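A minimal verification sketch; the host, port, and paths are examples:

# Is the server up and accepting connections?
pg_isready -h localhost -p 5432

# Is the dump a readable archive? -l prints its table of contents
# (works for custom and directory formats, not plain SQL).
pg_restore -l /backups/galaxy_shop.dump | head

# Is the connecting role a superuser?
psql -U postgres -c "SELECT current_user, usesuper FROM pg_user WHERE usename = current_user;"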
Create a fresh database so you start clean: createdb -U postgres galaxy_shop. (pg_restore can also create it for you with -C.)
Run pg_restore -U postgres -d galaxy_shop -j 4 -v /backups/galaxy_shop.dump. The -j 4 flag runs four parallel jobs; -v prints progress.
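Since -v writes its progress messages to stderr, one way to keep an audit trail is to redirect them to a log file and check the exit status afterward (the log path is an example):

# Capture progress and any errors for later inspection
pg_restore -U postgres -d galaxy_shop -j 4 -v /backups/galaxy_shop.dump 2> /backups/restore.log
echo "pg_restore exited with status $?"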
pg_restore -h localhost -p 5432 -U postgres -C -d postgres /backups/galaxy_shop.dump creates the database and immediately restores into it. With -C, the -d postgres argument names an existing maintenance database to connect to first; pg_restore then issues CREATE DATABASE using the name recorded in the dump.
1. Create the database.
2. Load the dump: psql -U postgres -d galaxy_shop -f galaxy_shop.sql (a fuller sketch follows below).
Note that --no-owner is a pg_dump/pg_restore option, not a psql one: for a plain SQL dump, pass --no-owner to pg_dump at dump time if the original roles won't exist on the target. With pg_restore you can also use -l to list the archive's table of contents and -j for parallel restore (custom or directory format only).
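The promised sketch of the plain-SQL path, assuming the dump was written to galaxy_shop.sql; -v ON_ERROR_STOP=1 makes psql abort on the first error instead of ploughing on, and -1 wraps the whole load in a single transaction so a failure leaves nothing half-applied:

createdb -U postgres galaxy_shop
psql -U postgres -d galaxy_shop -1 -v ON_ERROR_STOP=1 -f galaxy_shop.sql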
Run quick checks: \dt to list tables, count rows in critical tables, and inspect errors in the pg_restore output.
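For example (the orders table and the log path are placeholders for your own critical tables and files):

# List tables in the restored database
psql -U postgres -d galaxy_shop -c '\dt'

# Spot-check row counts in critical tables
psql -U postgres -d galaxy_shop -c 'SELECT count(*) FROM orders;'

# Scan the restore log for problems
grep -i error /backups/restore.log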
• Restore into a staging DB first.
• Raise maintenance_work_mem and temporarily disable autovacuum (see the sketch after this list).
• Monitor disk, CPU, and WAL generation.
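A sketch of those two tweaks, assuming superuser access; PGOPTIONS passes per-session settings to the connections pg_restore opens, and the values shown are examples rather than recommendations:

# Raise maintenance_work_mem only for the restore's sessions
PGOPTIONS='-c maintenance_work_mem=2GB' pg_restore -U postgres -d galaxy_shop -j 4 /backups/galaxy_shop.dump

# Disable autovacuum cluster-wide for the duration, then re-enable it
psql -U postgres -c "ALTER SYSTEM SET autovacuum = off" -c "SELECT pg_reload_conf()"
# ... run the restore ...
psql -U postgres -c "ALTER SYSTEM SET autovacuum = on" -c "SELECT pg_reload_conf()"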
• Forgetting -C when the target DB doesn't exist.
• Restoring into an existing DB without -c, leaving stale objects behind (fixes for both are sketched below).
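Examples of both fixes, reusing the paths from the earlier steps:

# Target DB missing: let pg_restore create it, connecting via the postgres maintenance DB
pg_restore -U postgres -C -d postgres /backups/galaxy_shop.dump

# Target DB exists: drop conflicting objects first; --if-exists suppresses errors for absent ones
pg_restore -U postgres -c --if-exists -d galaxy_shop /backups/galaxy_shop.dump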
Can you restore over an existing database? Yes: add -c (--clean) to drop conflicting objects before restoring.
How do you speed up a large restore? Use -j for parallelism, raise maintenance_work_mem, and restore on SSD storage.
What if the dump uses extensions that aren't installed on the target? Install the required extensions with CREATE EXTENSION before restoring, or skip their entries by editing a table-of-contents list produced with -l and restoring with -L.
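Both approaches sketched; pg_trgm stands in for whatever extension your dump actually needs:

# Pre-create the extension so restored objects that depend on it succeed
psql -U postgres -d galaxy_shop -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'

# Or: restore from an edited table of contents that omits extension entries
pg_restore -l /backups/galaxy_shop.dump > toc.list
grep -v -i 'EXTENSION' toc.list > toc_filtered.list
pg_restore -U postgres -d galaxy_shop -L toc_filtered.list /backups/galaxy_shop.dump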
Should you stop application writes during the restore? Yes, lock out writes to avoid conflicts and keep the data consistent while the restore runs.
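One way to fence off ordinary writers, assuming superuser access (superusers and roles holding an explicit CONNECT grant can still get in):

# Block new ordinary connections to the database
psql -U postgres -c "REVOKE CONNECT ON DATABASE galaxy_shop FROM PUBLIC"

# Terminate sessions that are already connected
psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'galaxy_shop' AND pid <> pg_backend_pid()"

# ... run the restore as a superuser ...

# Re-open the database afterward
psql -U postgres -c "GRANT CONNECT ON DATABASE galaxy_shop TO PUBLIC"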
Can you restore a single table? Yes: pg_restore -t Orders -d galaxy_shop /backups/shop.dump restores only the Orders table. Note that -t deliberately skips objects the table depends on, so the schema and any required types must already exist in the target.
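If the table already exists and only its rows need refreshing, a data-only variant of the same command (the quoted identifier preserves the mixed-case Orders from the example):

# Empty the existing table first
psql -U postgres -d galaxy_shop -c 'TRUNCATE "Orders"'

# Reload only that table's rows from the dump
pg_restore -U postgres -a -t Orders -d galaxy_shop /backups/shop.dump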