The BACKUP statement saves ClickHouse tables, databases, or an entire cluster to local or remote storage so you can restore the data later. It captures both data and metadata in a snapshot that can be written to local disks, S3 buckets, or HDFS; you can later RESTORE from that snapshot.
Run BACKUP before major schema changes or upgrades, or on a regular schedule to protect against data loss. Automate it with cron or an orchestration tool.
Run BACKUP TABLE Orders TO Disk('local_backup', 'orders.zip') to copy both data and metadata for the Orders table into an orders.zip archive on the local_backup disk (the Disk destination takes the disk name and a path within it).
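For scheduled jobs you may not want the statement to block until the backup finishes. ClickHouse supports an ASYNC modifier that returns immediately; a minimal sketch, reusing the local_backup disk and the illustrative orders.zip path from above:

-- Start the backup in the background instead of blocking the session.
BACKUP TABLE Orders TO Disk('local_backup', 'orders.zip') ASYNC

The statement returns as soon as the backup is registered; poll system.backups (shown later) to confirm completion.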
To back up several tables at once, list each with its own TABLE keyword: BACKUP TABLE Orders, TABLE OrderItems TO S3(s3_conn, 'ecom_backup') (here s3_conn is assumed to be a named collection holding the endpoint and credentials). All listed tables are copied in one snapshot.
Use BACKUP DATABASE ecommerce TO S3(s3_conn, 'ecom_backup') to back up a whole database. All tables inside ecommerce are included.
Append ON CLUSTER my_cluster to back up data across all shards: BACKUP DATABASE ecommerce ON CLUSTER my_cluster TO ...
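Filled out with the S3 destination from earlier, the full statement would look like the sketch below (my_cluster and the s3_conn named collection are assumed to exist in your configuration):

-- Back up every shard of the ecommerce database into one snapshot.
BACKUP DATABASE ecommerce ON CLUSTER my_cluster TO S3(s3_conn, 'ecom_backup')

Each node contributes its own data, so the destination must be reachable from every host in the cluster.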
Name each backup for easy lookup by encoding an identifier in the destination path: BACKUP TABLE Orders TO Disk('local', 'orders_2024_04_20.zip'). (The AS clause, as in BACKUP TABLE Orders AS orders_snapshot TO ..., instead renames the table inside the backup.)
Query system.backups for progress and errors: SELECT * FROM system.backups ORDER BY start_time DESC LIMIT 5.
1. Store at least one off-site copy.
2. Automate with retries.
3. Monitor system.backups for failures (see the query sketch after this list).
4. Version your backups using timestamped names.
5. Test RESTORE regularly.
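A minimal monitoring query for item 3, assuming the status values reported by recent ClickHouse releases (BACKUP_FAILED marks a failed backup):

-- Surface recent failures together with their error messages.
SELECT id, name, status, error, start_time
FROM system.backups
WHERE status = 'BACKUP_FAILED'
ORDER BY start_time DESC
LIMIT 10

Wiring this into alerting catches a failed nightly backup before the next one runs.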
To schedule recurring backups, parameterize the destination path: BACKUP TABLE Orders TO S3(s3_conn, 'orders_daily/{date}'). Replace {date} with a runtime variable in your scheduler.
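For example, the statement your scheduler would emit for April 20, 2024 might be (a sketch; the date format is an assumption):

-- {date} expanded by the scheduler at run time.
BACKUP TABLE Orders TO S3(s3_conn, 'orders_daily/2024-04-20')

An ISO date format sorts lexicographically, which makes listing and pruning old backups by prefix straightforward.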
During a backup, BACKUP uses snapshot isolation: writes can continue, but data written after the snapshot is taken is not included.
Backups can be compressed. Configure the destination disk or S3 connection with compression enabled and ClickHouse handles compression automatically.
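ClickHouse also accepts compression settings on the statement itself; a sketch, assuming the setting names from current releases and the illustrative paths used earlier:

-- Request LZMA compression for the backup archive.
BACKUP TABLE Orders TO Disk('local_backup', 'orders.zip')
SETTINGS compression_method = 'lzma', compression_level = 3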
To restore, use the complementary RESTORE command, pointing it at the same storage path and backup name.
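A sketch pairing the first example above with its restore (assuming the orders.zip path used earlier):

-- Restore the Orders table from the local disk backup.
RESTORE TABLE Orders FROM Disk('local_backup', 'orders.zip')

The same FROM clause works with S3 destinations.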