Transfer data from Amazon Redshift clusters into Microsoft SQL Server databases using export-to-S3 and bulk-load techniques.
Export tables to Amazon S3 with Redshift UNLOAD, then copy the files to the SQL Server host and load them with BULK INSERT or OPENROWSET. This keeps the process fully SQL-driven and parallel.
Both commands are native to their platforms, carry no licensing cost, are scriptable, and add no intermediate service between the two systems. They also preserve column types closely when you control the file format.
UNLOAD ($$
SELECT id, name, email, created_at
FROM public.customers
$$)
TO 's3://galaxy-migrations/customers_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV
HEADER
DELIMITER ',';
Copy the files with the AWS CLI, using aws s3 cp or aws s3 sync. Example:
aws s3 cp s3://galaxy-migrations/ C:\exports\customers\ --recursive
BULK INSERT dbo.Customers
FROM 'C:\exports\customers\customers_000.csv'
WITH (
    FORMAT = 'CSV',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2  -- skip the header row
);
Create a shell or PowerShell loop that lists Redshift tables, generates UNLOAD + BULK INSERT for each, and logs progress. Pair with SQL Agent for scheduling.
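A minimal sketch of such a driver loop in shell, assuming a hypothetical table list (customers, orders) and the bucket and IAM role from the examples above; the generated statements would be executed via psql against Redshift and sqlcmd against SQL Server:

```shell
#!/bin/sh
# Generate one UNLOAD and one BULK INSERT statement per table.
# Table list and local paths are illustrative placeholders.
BUCKET="s3://galaxy-migrations"
IAM_ROLE="arn:aws:iam::123456789012:role/RedshiftUnloadRole"

gen_unload() {
  table="$1"
  cat <<EOF
UNLOAD ('SELECT * FROM public.$table')
TO '$BUCKET/${table}_'
IAM_ROLE '$IAM_ROLE'
FORMAT AS CSV HEADER;
EOF
}

gen_bulk_insert() {
  table="$1"
  cat <<EOF
BULK INSERT dbo.$table
FROM 'C:\\exports\\${table}_000.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);
EOF
}

for t in customers orders; do
  gen_unload "$t"        # run via psql against Redshift
  gen_bulk_insert "$t"   # run via sqlcmd against SQL Server
done
```

Logging progress is then a matter of appending each table name and exit code to a file after the sqlcmd call.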
Redshift BOOLEAN becomes SQL Server BIT; SUPER requires NVARCHAR(MAX) or JSON parsing. Define explicit CASTs in the UNLOAD SELECT to match the SQL Server target types.
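One way to keep those CASTs consistent across many tables is to centralize the mapping in the driver script. The hypothetical map_type helper below covers only an illustrative subset; it returns the expression to wrap around a column in the UNLOAD SELECT plus the matching SQL Server type, separated by a pipe:

```shell
#!/bin/sh
# Map a Redshift column type to the CAST used in the UNLOAD SELECT
# and the SQL Server target type. Illustrative subset only.
map_type() {
  case "$1" in
    boolean)  echo "CAST(%s AS INT)|BIT" ;;              # export as 0/1, loads into BIT
    super)    echo "JSON_SERIALIZE(%s)|NVARCHAR(MAX)" ;; # export SUPER as JSON text
    varchar*) echo "%s|NVARCHAR" ;;
    *)        echo "%s|UNKNOWN" ;;                       # flag anything unmapped
  esac
}
```

The %s placeholder stands in for the column name when the script assembles the SELECT list.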
Run SELECT COUNT(*) on both systems after the load, or insert counts into a control table during automation.
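In a driver script, the validation step can be a simple comparison; in this sketch the two counts would come from psql and sqlcmd calls, which are left out as placeholders:

```shell
#!/bin/sh
# Compare source and target row counts for one table. The real
# counts would be fetched via psql (Redshift) and sqlcmd (SQL Server).
check_counts() {
  table="$1"; src="$2"; dst="$3"
  if [ "$src" -eq "$dst" ]; then
    echo "OK $table: $src rows"
  else
    echo "MISMATCH $table: redshift=$src sqlserver=$dst"
    return 1
  fi
}
```

A call like check_counts customers 1042 1042 then doubles as a log line and an exit-code check for the automation.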
Split each table’s export into multiple files with UNLOAD’s default parallelism. On SQL Server, use the TABLOCK hint and a BATCHSIZE setting in BULK INSERT for speed.
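The slice-by-slice load can be sketched as a generator that emits one BULK INSERT per UNLOAD file; the table name, file paths, and batch size here are illustrative:

```shell
#!/bin/sh
# Emit one BULK INSERT per UNLOAD slice. TABLOCK enables minimally
# logged loads; BATCHSIZE bounds the rows committed per batch.
gen_slice_loads() {
  table="$1"; shift
  for f in "$@"; do
    cat <<EOF
BULK INSERT dbo.$table FROM '$f' WITH (FORMAT = 'CSV', FIRSTROW = 2, TABLOCK, BATCHSIZE = 100000);
EOF
  done
}

gen_slice_loads Customers 'C:\exports\customers_000.csv' 'C:\exports\customers_001.csv'
```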
Grant the Redshift IAM role write access only to a dedicated S3 prefix. Remove the files or rotate credentials after migration.
After the initial bulk migration, you can keep the systems in sync by scheduling incremental UNLOADs of new or changed rows (based on created_at/updated_at), BULK INSERTing them into staging tables, and then applying a MERGE into the targets.
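The staging-to-target step can be generated the same way as the other statements; the MERGE below is a sketch that assumes an id key and the column names from the earlier UNLOAD example, with a hypothetical dbo.Customers_Staging table:

```shell
#!/bin/sh
# Emit an upsert that applies a loaded staging table to its target.
# Key and column names are illustrative.
gen_merge() {
  target="$1"; staging="$2"
  cat <<EOF
MERGE $target AS t
USING $staging AS s ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.name = s.name, t.email = s.email
WHEN NOT MATCHED THEN
  INSERT (id, name, email) VALUES (s.id, s.name, s.email);
EOF
}

gen_merge dbo.Customers dbo.Customers_Staging
```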
Choose DMS if you need continuous replication with minimal scripting. For one-off or periodic loads, UNLOAD + BULK INSERT remains quicker and cheaper.
Wrong delimiter: using tabs in UNLOAD but commas in BULK INSERT causes misaligned columns. Always match the DELIMITER and FIELDTERMINATOR values.
Missing NULL handling: forgetting NULL AS '\\N' in UNLOAD leaves empty strings that SQL Server treats as text. Specify the NULL marker consistently on both sides.
UNLOAD + BULK INSERT offers a lean, scriptable path from Redshift to SQL Server. Adopt incremental loads and validation queries to keep migrations reliable.
UNLOAD is not the only option: AWS Database Migration Service can also move the data, but UNLOAD gives you full SELECT-level control over what is exported.
You can compress the exports: add GZIP to the UNLOAD options, then decompress the files on the SQL Server host (for example with gunzip) before loading, since BULK INSERT expects uncompressed input.
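The decompress-before-load step is a small pre-processing pass on the SQL Server host; directory and file names here are placeholders:

```shell
#!/bin/sh
# Decompress all gzipped UNLOAD slices in a directory so that
# BULK INSERT can read them as plain CSV files.
decompress_slices() {
  dir="$1"
  for f in "$dir"/*.gz; do
    [ -e "$f" ] || continue   # skip when no .gz files are present
    gunzip -f "$f"            # leaves e.g. customers_000.csv behind
  done
}
```

Running decompress_slices 'C:\exports\customers' before the BULK INSERT loop keeps the rest of the pipeline unchanged.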
Split each table into multiple slices with UNLOAD's parallelism, use multiple BULK INSERT statements inside a transaction, and consider increasing SQL Server's MAXDOP.