Copying a table reproduces its schema and data in another Amazon Redshift database via UNLOAD/COPY, Data Sharing, or CREATE TABLE AS.
Copying lets you migrate data to a new cluster, refresh staging environments, or share read-only datasets without exposing the entire source database.
Three reliable choices exist: 1) UNLOAD to Amazon S3 then COPY on the target, 2) Redshift Data Sharing for same-region clusters, and 3) CREATE TABLE AS using Redshift Spectrum or federated queries.
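Of the three, the CREATE TABLE AS route can be the shortest path when the source data is already reachable through Spectrum. A minimal sketch, assuming a hypothetical Glue database sales_catalog that catalogs the source files and placeholder role ARNs:

```sql
-- On the target cluster: map an external (Spectrum) schema onto the Glue
-- database, then materialize the table locally with CTAS.
-- All names and ARNs below are placeholders.
CREATE EXTERNAL SCHEMA spectrum_sales
FROM DATA CATALOG
DATABASE 'sales_catalog'
IAM_ROLE 'arn:aws:iam::222222222222:role/redshift-spectrum-role';

CREATE TABLE sales.orders AS
SELECT * FROM spectrum_sales.orders;
```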
Use UNLOAD/COPY when clusters are in different AWS accounts or regions, or when you need full control over file format, compression, and parallelism.
The source cluster needs an IAM role that can write to the S3 bucket (PutObject, ListBucket), the target cluster needs a role that can read from it (GetObject, ListBucket), and the user running UNLOAD needs USAGE on the source schema and SELECT on the table.
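The database-side grants are plain SQL. A sketch with hypothetical names (schema sales, table orders, user etl_user); the S3 permissions themselves live in the IAM role, not in SQL:

```sql
-- Run on the source cluster so the unloading user can read the table.
GRANT USAGE ON SCHEMA sales TO etl_user;
GRANT SELECT ON sales.orders TO etl_user;
```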
Before running COPY, create the table on the target with identical DISTKEY, SORTKEY, and ENCODE settings, using the DDL that SHOW TABLE prints on the source.
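For example, with a hypothetical sales.orders table, SHOW TABLE on the source prints DDL you can replay on the target (the column list below is illustrative):

```sql
-- On the source cluster: print the full DDL, including keys and encodings.
SHOW TABLE sales.orders;

-- On the target cluster: recreate the table from that output.
CREATE TABLE sales.orders (
    order_id    BIGINT        ENCODE az64,
    customer_id BIGINT        ENCODE az64,
    order_date  DATE          ENCODE az64,
    total       DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (order_date);
```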
1) UNLOAD the data to S3 in Parquet. 2) Optionally grant bucket access to the target account. 3) Recreate the table definition on the target. 4) COPY the files into the new table.
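Put together, a minimal end-to-end sketch, assuming the hypothetical sales.orders table, an S3 prefix s3://my-transfer-bucket/orders/, and placeholder IAM role ARNs for each cluster; step 2 (cross-account bucket access) is an S3 bucket-policy change rather than SQL, so it is omitted here:

```sql
-- Step 1: on the source cluster, unload the table to S3 as Parquet.
UNLOAD ('SELECT * FROM sales.orders')
TO 's3://my-transfer-bucket/orders/'
IAM_ROLE 'arn:aws:iam::111111111111:role/redshift-unload-role'
FORMAT AS PARQUET;

-- Step 3: recreate the table on the target (see the DDL sketch above).

-- Step 4: on the target cluster, load the Parquet files into the new table.
COPY sales.orders
FROM 's3://my-transfer-bucket/orders/'
IAM_ROLE 'arn:aws:iam::222222222222:role/redshift-copy-role'
FORMAT AS PARQUET;
```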
Yes. Include a WHERE clause in the SELECT statement you pass to UNLOAD, e.g., WHERE order_date > current_date - 30.
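For instance, to copy only the last 30 days of the hypothetical sales.orders table:

```sql
-- The filter lives inside the query text passed to UNLOAD.
UNLOAD ('SELECT * FROM sales.orders WHERE order_date > current_date - 30')
TO 's3://my-transfer-bucket/orders_recent/'
IAM_ROLE 'arn:aws:iam::111111111111:role/redshift-unload-role'
FORMAT AS PARQUET;
```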
Check the STL_UNLOAD_LOG and STL_LOAD_COMMITS system views; they show file counts, row counts, and per-file status. Row-level load failures are recorded in STL_LOAD_ERRORS.
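A few quick checks, assuming the UNLOAD ran on the source cluster and the COPY on the target:

```sql
-- Source cluster: files and rows written by recent UNLOAD statements.
SELECT query, path, line_count, transfer_size
FROM stl_unload_log
ORDER BY start_time DESC
LIMIT 20;

-- Target cluster: files committed by recent COPY statements.
SELECT query, filename, lines_scanned, curtime
FROM stl_load_commits
ORDER BY curtime DESC
LIMIT 20;

-- Target cluster: any rows COPY rejected, with the reason.
SELECT starttime, filename, line_number, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 20;
```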
Unload in Parquet with MAXFILESIZE 250 MB, set COMPUPDATE OFF during COPY, and leverage manifest files to avoid missing objects.
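A sketch of those options together (bucket, prefix, and role ARNs are placeholders; verify the generated manifest format against your Redshift version before relying on it):

```sql
-- Source cluster: cap output files at 250 MB and write a manifest listing them.
UNLOAD ('SELECT * FROM sales.orders')
TO 's3://my-transfer-bucket/orders/'
IAM_ROLE 'arn:aws:iam::111111111111:role/redshift-unload-role'
FORMAT AS PARQUET
MAXFILESIZE 250 MB
MANIFEST;

-- Target cluster: load exactly the files named in the manifest and skip
-- compression analysis, since the target DDL already specifies encodings.
COPY sales.orders
FROM 's3://my-transfer-bucket/orders/manifest'
IAM_ROLE 'arn:aws:iam::222222222222:role/redshift-copy-role'
FORMAT AS PARQUET
MANIFEST
COMPUPDATE OFF;
```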
Wrap UNLOAD and COPY commands in an AWS Step Functions workflow or a Galaxy SQL notebook scheduled by cron.
Yes, when the clusters are in the same region and account, because no data has to land in S3. However, Data Sharing provides live, read-only access; it does not physically copy data into the target cluster.
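A minimal Data Sharing sketch; the share name, schema, and namespace GUIDs below are placeholders:

```sql
-- Producer (source) cluster: expose the table through a datashare.
CREATE DATASHARE orders_share;
ALTER DATASHARE orders_share ADD SCHEMA sales;
ALTER DATASHARE orders_share ADD TABLE sales.orders;
-- Grant the share to the consumer cluster's namespace GUID.
GRANT USAGE ON DATASHARE orders_share
  TO NAMESPACE '11111111-2222-3333-4444-555555555555';

-- Consumer (target) cluster: surface the shared data as a local database,
-- referencing the producer cluster's namespace GUID.
CREATE DATABASE orders_from_producer
  FROM DATASHARE orders_share
  OF NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';

-- The shared objects are read-only; a writable local copy would require
-- materializing them, e.g. with CREATE TABLE ... AS SELECT ... FROM
-- orders_from_producer.sales.orders.
```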
Yes. DROP or TRUNCATE the target table before the COPY, or load into a table in a staging schema and swap schema names once the load completes, as sketched below.
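Both patterns in sketch form, reusing the placeholder names from above:

```sql
-- Option 1: replace the contents in place before reloading.
TRUNCATE TABLE sales.orders;
COPY sales.orders
FROM 's3://my-transfer-bucket/orders/'
IAM_ROLE 'arn:aws:iam::222222222222:role/redshift-copy-role'
FORMAT AS PARQUET;

-- Option 2: load into a staging schema (here sales_staging), validate the data,
-- then swap schema names so the cut-over is near-instant.
ALTER SCHEMA sales RENAME TO sales_old;
ALTER SCHEMA sales_staging RENAME TO sales;
```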