Back up Snowflake with zero-copy clones, Time Travel, and external exports to enable fast, granular recovery.
Time Travel keeps historical data for only 1–90 days, depending on your edition and retention settings. Backups let you restore data after that window, move data between accounts, or meet legal retention rules.
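If you are on Enterprise Edition or higher, you can raise a table's retention toward the 90-day maximum before relying on backups; a minimal sketch, assuming the myshop.public.orders table used later in this guide:
-- Requires Enterprise Edition or higher; Standard Edition caps retention at 1 day.
ALTER TABLE myshop.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
-- Confirm the new setting.
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE myshop.public.orders;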
Three options: (1) zero-copy CLONE for instant, space-efficient snapshots; (2) COPY INTO to export Parquet or CSV files to S3 for off-platform storage; (3) scheduled TASK chains that automate either approach.
Run CREATE DATABASE myshop_backup CLONE myshop AT (TIMESTAMP => TO_TIMESTAMP_TZ('2024-05-01 00:00:00')); the clone appears immediately and consumes no additional storage until data changes.
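Restoring is just another clone in the opposite direction; a sketch, assuming you want to recover the orders table (the myshop_restored name is illustrative):
-- Recover a single table from the point-in-time backup into the live schema.
CREATE OR REPLACE TABLE myshop.public.orders CLONE myshop_backup.public.orders;
-- Or recover the whole database under a new name and validate it before swapping it in.
CREATE DATABASE myshop_restored CLONE myshop_backup;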
Clone individual tables: CREATE OR REPLACE TABLE customers_bak CLONE myshop.public.customers; then repeat for orders, products, and orderitems, as in the sketch below.
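A sketch of the full set; the orders_bak, products_bak, and orderitems_bak names follow the customers_bak pattern and are assumptions:
CREATE OR REPLACE TABLE customers_bak CLONE myshop.public.customers;
CREATE OR REPLACE TABLE orders_bak CLONE myshop.public.orders;
CREATE OR REPLACE TABLE products_bak CLONE myshop.public.products;
CREATE OR REPLACE TABLE orderitems_bak CLONE myshop.public.orderitems;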
1) CREATE STAGE s3_bak URL='s3://my-bucket/backup' CREDENTIALS=(AWS_KEY_ID='...' AWS_SECRET_KEY='...'); (for production, a STORAGE INTEGRATION is preferred over inline credentials)
2) COPY INTO @s3_bak FROM orders FILE_FORMAT=(TYPE=PARQUET) OVERWRITE=TRUE;
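Reloading the export reverses the COPY; a sketch, assuming the target table already exists with a matching schema:
-- MATCH_BY_COLUMN_NAME maps Parquet columns to table columns by name instead of position.
COPY INTO orders FROM @s3_bak FILE_FORMAT=(TYPE=PARQUET) MATCH_BY_COLUMN_NAME=CASE_INSENSITIVE;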
Create a TASK that runs daily and calls either CREATE ... CLONE or COPY INTO. Example: CREATE TASK daily_db_clone SCHEDULE='USING CRON 0 5 * * * UTC' AS CREATE OR REPLACE DATABASE myshop_backup CLONE myshop; New tasks start suspended, so resume them as shown below.
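A sketch of the two follow-up statements:
-- Newly created tasks are suspended until explicitly resumed.
ALTER TASK daily_db_clone RESUME;
-- Optionally trigger one run right away to confirm the backup works.
EXECUTE TASK daily_db_clone;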
Use descriptive names with dates (myshop_backup_20240501), store exports in versioned buckets, encrypt objects, and test restores monthly; a date-stamping sketch follows.
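One way to date-stamp clone names is a session variable passed through IDENTIFIER(); a sketch (the backup_name variable is illustrative):
-- Build a name like myshop_backup_20240501 from the current date.
SET backup_name = 'myshop_backup_' || TO_CHAR(CURRENT_DATE, 'YYYYMMDD');
CREATE DATABASE IDENTIFIER($backup_name) CLONE myshop;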
No. Clones share micro-partitions with the source until data diverges. You only pay for changed data.
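You can verify this in the account usage views, where tables in the same clone group share a CLONE_GROUP_ID and ACTIVE_BYTES counts only the storage each table owns; a sketch:
SELECT table_name, clone_group_id, active_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE table_catalog = 'MYSHOP_BACKUP';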
Not directly; clones cannot cross account boundaries. Use CREATE DATABASE ... CLONE in the same account, then Snowflake database replication or an external unload to move the data to another account.
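A replication sketch, assuming two hypothetical accounts myorg.acct1 (source) and myorg.acct2 (target):
-- On the source account: allow the database to replicate to the target account.
ALTER DATABASE myshop ENABLE REPLICATION TO ACCOUNTS myorg.acct2;
-- On the target account: create a local replica and refresh it.
CREATE DATABASE myshop_replica AS REPLICA OF myorg.acct1.myshop;
ALTER DATABASE myshop_replica REFRESH;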
Retention depends on your S3 lifecycle rules. Set policies to transition older backups to Glacier and delete after your compliance window.
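A minimal S3 lifecycle rule along those lines; the prefix and the 35-day/365-day windows are illustrative:
{
  "Rules": [{
    "ID": "snowflake-backup-retention",
    "Filter": { "Prefix": "backup/" },
    "Status": "Enabled",
    "Transitions": [{ "Days": 35, "StorageClass": "GLACIER" }],
    "Expiration": { "Days": 365 }
  }]
}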