Creates an isolated Redshift schema or cluster that mirrors production data for safe testing and development.
Staging lets engineers validate schema changes, run heavy reports, and test ETL without endangering production performance or data integrity.
Create a separate staging schema inside the same cluster for lightweight testing. Spin up a duplicate cluster when you need full isolation, performance benchmarking, or disaster-recovery drills.
Run CREATE SCHEMA staging AUTHORIZATION your_user; then duplicate each table with CREATE TABLE staging.table_name (LIKE public.table_name); to copy structure only.
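A minimal sketch of those two statements, assuming a placeholder production table public.orders and a placeholder database user dev_user:

```sql
-- Create the staging schema, owned by a development user
CREATE SCHEMA staging AUTHORIZATION dev_user;

-- Clone structure only: LIKE copies column definitions and the
-- distribution/sort keys, but no rows. Add INCLUDING DEFAULTS
-- if you also want column default values carried over.
CREATE TABLE staging.orders (LIKE public.orders);
```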
Use INSERT INTO staging.table_name SELECT * FROM public.table_name; for small sets. For large volumes, UNLOAD production data to S3 and COPY it into the staging cluster in parallel.
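A sketch of the UNLOAD/COPY round trip, assuming a hypothetical bucket s3://my-staging-bucket and IAM role RedshiftStagingRole with the needed S3 permissions:

```sql
-- On production: export to S3 in parallel, one file per slice,
-- gzip-compressed to reduce S3 storage and transfer cost
UNLOAD ('SELECT * FROM public.orders')
TO 's3://my-staging-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftStagingRole'
GZIP;

-- On the staging cluster: COPY reads the file parts in parallel
COPY staging.orders
FROM 's3://my-staging-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftStagingRole'
GZIP;
```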
Grant developers SELECT and temporary INSERT on staging tables. Revoke UPDATE and DELETE unless specifically required, to avoid accidental data loss.
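A sketch of that permission model, assuming a hypothetical developer group dev_group:

```sql
-- Let developers see and load staging tables
GRANT USAGE ON SCHEMA staging TO GROUP dev_group;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA staging TO GROUP dev_group;

-- Withhold destructive privileges unless a task explicitly needs them
REVOKE UPDATE, DELETE ON ALL TABLES IN SCHEMA staging FROM GROUP dev_group;
```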
Automate a DROP SCHEMA staging CASCADE; followed by a rebuild using the same CREATE SCHEMA, CREATE TABLE ... LIKE, and COPY commands described above. Trigger the job via Amazon EventBridge Scheduler or your CI/CD pipeline to keep staging in sync with production.
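The refresh job can be sketched as a single script run on a schedule; table, user, bucket, and role names below are placeholders:

```sql
-- Tear down the old copy and everything in it
DROP SCHEMA IF EXISTS staging CASCADE;

-- Rebuild structure, then reload from the latest S3 unload
CREATE SCHEMA staging AUTHORIZATION dev_user;
CREATE TABLE staging.orders (LIKE public.orders);
COPY staging.orders
FROM 's3://my-staging-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftStagingRole'
GZIP;
```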
Tag staging resources, monitor disk usage, and enforce a data-retention policy. Compress unloaded files with gzip to cut S3 costs.
Yes. Create just those tables in staging with CREATE TABLE ... LIKE and copy data selectively. This speeds up refreshes and saves storage.
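A sketch of a selective copy, assuming a hypothetical public.orders table with an order_date column and a 30-day retention window:

```sql
-- Mirror only the tables you need
CREATE TABLE staging.orders (LIKE public.orders);

-- Copy only recent rows instead of the full history
INSERT INTO staging.orders
SELECT * FROM public.orders
WHERE order_date >= DATEADD(day, -30, CURRENT_DATE);
```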
If staging shares the production cluster, heavy staging queries can consume shared resources. Configure WLM to route staging work to a low-priority queue, or use a separate cluster.
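One way to route a session's work, assuming a manual WLM queue has been mapped to a query group named 'staging' in the cluster's parameter group:

```sql
-- Send this session's queries to the low-priority staging queue
SET query_group TO 'staging';

SELECT COUNT(*) FROM staging.orders;

-- Return subsequent queries to the default queue
RESET query_group;
```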
Daily is typical, but high-change apps may need hourly. Balance data freshness with cost and cluster load.