Backing up ParadeDB creates a portable dump of all tables, indexes, and extension objects created by ParadeDB so you can restore them on another PostgreSQL server.
ParadeDB stores its data inside ordinary PostgreSQL tables, so the native pg_dump utility guarantees a consistent snapshot, includes extension metadata, and lets you restore on any compatible server.
Run pg_dump --format=custom --create --file=paradedb.backup your_db. The custom format supports parallel restore, compression, and integrity checks, which makes it ideal for the large vector or search indexes ParadeDB builds.
To filter by schema or extension, run pg_dump --schema=paradedb --file=paradedb_objects.sql your_db. This keeps the dump small while preserving all ParadeDB-specific functions, tables, and indexes.
Backups are easy to automate. Create a cron job or CI pipeline that calls pg_dump daily and uploads the file to S3, then pass the --clean --if-exists flags to pg_restore to keep automated restores idempotent, as in the sketch below.
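A minimal nightly script might look like the following sketch. The database name (your_db), the S3 bucket, and the AWS CLI upload step are placeholders; adapt them to your environment.

    #!/bin/sh
    # Nightly ParadeDB backup sketch: dump in custom format, then upload.
    # your_db and the bucket name are placeholders; assumes AWS CLI credentials
    # are already configured on the host running the job.
    set -eu
    STAMP=$(date +%Y%m%d)
    pg_dump --format=custom --create --file="paradedb_${STAMP}.backup" your_db
    aws s3 cp "paradedb_${STAMP}.backup" "s3://your-backup-bucket/paradedb/"
    # At restore time, pass --clean --if-exists to pg_restore so repeated
    # automated restores drop existing objects before recreating them.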
Use pg_restore --dbname=target_db --create paradedb.backup. The --create flag rebuilds the database, reinstalls the ParadeDB extension, and replays all data.
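Assuming the dump was taken with --format=custom, a full automated restore could look like the sketch below. Note that with --create, the --dbname value is only the initial connection target; pg_restore then recreates the database recorded in the dump.

    # Connect to an existing maintenance database; --create recreates the
    # dumped database, and --clean --if-exists drops it first if present.
    pg_restore --create --clean --if-exists --dbname=postgres paradedb.backup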
Automate a nightly restore into a staging server, then verify that vector search queries still return the expected results from tables such as Products or Orders.
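One possible smoke test, assuming a products table with a pgvector embedding column of dimension 3 (both the schema and the query vector are stand-ins for your own data):

    # Hypothetical check: run a nearest-neighbor query against the restored
    # staging copy; a non-empty result suggests the index survived the restore.
    psql -d staging_db -c \
      "SELECT id FROM products ORDER BY embedding <-> '[0.1,0.2,0.3]' LIMIT 5;"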
Always back up extension metadata (--create) along with table data; omitting it causes CREATE FUNCTION errors on restore.
Save dumps outside the database host. Popular options are AWS S3, GCS, or an on-prem object store. Encrypt files at rest using gpg or S3 server-side encryption.
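For example, symmetric gpg encryption before upload (the bucket name is a placeholder):

    # Encrypt with a passphrase; gpg writes paradedb.backup.gpg by default.
    gpg --symmetric --cipher-algo AES256 paradedb.backup
    aws s3 cp paradedb.backup.gpg s3://your-backup-bucket/paradedb/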
pg_dump does not block normal traffic. It takes an ACCESS SHARE lock on the tables it dumps, so reads and writes continue; only conflicting schema changes such as DROP TABLE are blocked for the duration of the dump.
Dumps can be compressed. Use -Z9 for maximum built-in compression, or pipe a plain-text dump to gzip.
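Both approaches in practice (file names are placeholders):

    # Built-in compression at the highest level with the custom format.
    pg_dump --format=custom -Z9 --file=paradedb.backup your_db
    # Or stream a plain-text dump through gzip.
    pg_dump your_db | gzip > paradedb.sql.gz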
Both tools support parallelism. Add -j4 (or higher) to pg_dump with the directory format, or to pg_restore with a custom or directory archive, to speed up large ParadeDB datasets.
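For example (archive paths are placeholders):

    # Parallel dump requires the directory format.
    pg_dump --format=directory -j4 --file=paradedb_dir your_db
    # Parallel restore accepts custom or directory archives.
    pg_restore -j4 --dbname=target_db paradedb_dir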