How to Migrate from Redshift to SQL Server

Galaxy Glossary

How do I migrate data from Redshift to SQL Server using SQL commands only?

Transfer data from Amazon Redshift clusters into Microsoft SQL Server databases using export-to-S3 and bulk-load techniques.

Description

What is the fastest way to move Redshift data into SQL Server?

Export tables to Amazon S3 with Redshift UNLOAD, then copy the files to SQL Server and load them with BULK INSERT or OPENROWSET. This keeps the process fully SQL-driven and parallel.

Why choose UNLOAD + BULK INSERT over ETL tools?

Both commands are native to Redshift and SQL Server, cost nothing beyond the databases you already run, and are fully scriptable, so no extra ETL layer adds latency. They also preserve column types closely when you control the file format.

How do I export an ecommerce table from Redshift?

UNLOAD ($$
SELECT id, name, email, created_at
FROM public.customers
$$)
TO 's3://galaxy-migrations/customers_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV
HEADER
DELIMITER ',';

How do I copy the S3 files to my SQL Server host?

Use the AWS CLI's aws s3 cp or aws s3 sync commands. Example:

aws s3 cp s3://galaxy-migrations/ C:\exports\customers\ --recursive --exclude "*" --include "customers_*"

What BULK INSERT statement loads the data?

BULK INSERT dbo.Customers
FROM 'C:\exports\customers\customers_000.csv'
WITH (
FORMAT='CSV',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a', -- UNLOAD writes LF-only line endings
FIRSTROW = 2 -- skip the header row
);

How do I automate dozens of tables?

Create a shell or PowerShell loop that lists Redshift tables, generates UNLOAD + BULK INSERT for each, and logs progress. Pair with SQL Agent for scheduling.
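One SQL-only way to drive that loop is to generate the UNLOAD statements straight from Redshift's catalog. The query below is a hedged sketch that reuses the placeholder bucket and IAM role from the earlier examples; your loop would execute each generated statement and build the matching BULK INSERT for the files it produces.

-- Run in Redshift: emits one UNLOAD command per table in the public schema
SELECT 'UNLOAD ($$SELECT * FROM public.' || table_name || '$$) '
    || 'TO ''s3://galaxy-migrations/' || table_name || '_'' '
    || 'IAM_ROLE ''arn:aws:iam::123456789012:role/RedshiftUnloadRole'' '
    || 'FORMAT AS CSV HEADER;' AS unload_sql
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_type = 'BASE TABLE';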

What data types need special care?

Redshift BOOLEAN becomes SQL Server BIT; SUPER requires NVARCHAR or JSON parsing. Define explicit CASTs in the UNLOAD SELECT to match SQL Server targets.
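For example, a hedged UNLOAD sketch with explicit conversions; is_active and attributes are illustrative column names, not part of the article's schema:

UNLOAD ($$
SELECT id,
       CASE WHEN is_active THEN 1 ELSE 0 END AS is_active,  -- BOOLEAN -> 0/1 for SQL Server BIT
       JSON_SERIALIZE(attributes)            AS attributes  -- SUPER -> JSON text for NVARCHAR(MAX)
FROM public.customers
$$)
TO 's3://galaxy-migrations/customers_typed_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV HEADER;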

How can I validate that row counts match?

Run SELECT COUNT(*) on both systems after load, or insert counts into a control table during automation.
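A hedged T-SQL sketch of the control-table approach; dbo.MigrationAudit is an illustrative name, not from the article:

-- Run on SQL Server after each load
CREATE TABLE dbo.MigrationAudit (
    table_name SYSNAME,
    row_count  BIGINT,
    loaded_at  DATETIME2 DEFAULT SYSUTCDATETIME()
);

INSERT INTO dbo.MigrationAudit (table_name, row_count)
SELECT 'dbo.Customers', COUNT(*) FROM dbo.Customers;
-- Compare row_count with SELECT COUNT(*) FROM public.customers run on Redshift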

Best practices for large tables?

Split each table’s export into multiple files with UNLOAD’s default parallelism. On SQL Server, enable TABLOCK and batch sizes in BULK INSERT for speed.
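If the default slices are still unwieldy, UNLOAD's MAXFILESIZE option caps each output file. The sketch below uses an arbitrary 256 MB limit and the placeholder prefix from the Syntax section:

UNLOAD ($$SELECT * FROM public.orders$$)
TO 's3://galaxy-migrations/orders_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV HEADER
MAXFILESIZE 256 MB;  -- smaller files keep individual BULK INSERT batches manageable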

What security measures are required?

Grant the Redshift IAM role write access only to a dedicated S3 prefix. Remove the files or rotate credentials after migration.

Can I keep Redshift and SQL Server in sync?

Yes. After the initial bulk migration, schedule incremental UNLOADs of new or changed rows (based on created_at/updated_at) and BULK INSERT into staging tables followed by MERGE.
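A hedged sketch of that staging-then-MERGE step on SQL Server, reusing the customers columns from earlier and an illustrative staging table named dbo.Customers_Staging:

-- Upsert freshly loaded rows from staging into the target table
MERGE dbo.Customers AS target
USING dbo.Customers_Staging AS source
    ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET target.name       = source.name,
               target.email      = source.email,
               target.created_at = source.created_at
WHEN NOT MATCHED THEN
    INSERT (id, name, email, created_at)
    VALUES (source.id, source.name, source.email, source.created_at);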

When should I use AWS DMS instead?

Choose DMS if you need continuous replication with minimal scripting. For one-off or periodic loads, UNLOAD + BULK INSERT remains quicker and cheaper.

Common mistakes and fixes

Wrong delimiter: Using tabs in UNLOAD but commas in BULK INSERT causes misaligned columns. Always match UNLOAD's DELIMITER to BULK INSERT's FIELDTERMINATOR.

Missing NULL handling: Forgetting NULL AS '\\N' in UNLOAD writes NULLs as empty strings, which SQL Server loads as empty text rather than NULL. Pick a NULL sentinel in UNLOAD and convert it back on the SQL Server side, as in the sketch below.
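A hedged sketch of that conversion, assuming the files were first bulk-loaded into an illustrative staging table dbo.Customers_Staging:

-- Convert the \N sentinel written by UNLOAD back into real NULLs
INSERT INTO dbo.Customers (id, name, email, created_at)
SELECT id,
       NULLIF(name,  '\N'),
       NULLIF(email, '\N'),
       created_at
FROM dbo.Customers_Staging;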

Conclusion

UNLOAD + BULK INSERT offers a lean, scriptable path from Redshift to SQL Server. Adopt incremental loads and validation queries to keep migrations reliable.

How to Migrate from Redshift to SQL Server Example Usage


-- End-to-end example for OrderItems table
-- 1. Redshift side
UNLOAD ($$SELECT id, order_id, product_id, quantity
        FROM public.orderitems$$)
TO 's3://galaxy-migrations/orderitems_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV HEADER;

-- 2. Copy files locally
aws s3 cp s3://galaxy-migrations/ C:\exports\orderitems\ --recursive --exclude "*" --include "orderitems_*"

-- 3. SQL Server load
BULK INSERT dbo.OrderItems
FROM 'C:\exports\orderitems\orderitems_000.csv'
WITH (
    FORMAT='CSV',
    FIELDTERMINATOR=',',
    ROWTERMINATOR='0x0a',  -- LF line endings from UNLOAD
    FIRSTROW=2,
    ERRORFILE='C:\exports\orderitems\errors.log'
);

How to Migrate from Redshift to SQL Server Syntax


-- Export from Redshift to S3
UNLOAD ($$SELECT * FROM public.orders$$)
TO 's3://galaxy-migrations/orders_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV
HEADER
DELIMITER ','
NULL AS '\\N';

-- Move files to SQL Server host (shell)
aws s3 sync s3://galaxy-migrations/ C:\exports\orders\ --exclude "*" --include "orders_*"

-- Import into SQL Server
BULK INSERT dbo.Orders
FROM 'C:\exports\orders\orders_000.csv'
WITH (
  FORMAT='CSV',
  FIELDTERMINATOR=',',
  ROWTERMINATOR='0x0a',  -- LF line endings from UNLOAD
  FIRSTROW=2,
  TABLOCK,
  BATCHSIZE=50000
);

Frequently Asked Questions (FAQs)

Is UNLOAD the only way to export from Redshift?

No. AWS Database Migration Service and third-party ETL tools can also move the data, but UNLOAD is Redshift's native SQL export command and gives you full SELECT control plus parallel output to S3.

Can I compress the files to speed transfer?

Yes. Add GZIP to the UNLOAD options to shrink the files for transfer, then decompress them on the SQL Server host before loading, since BULK INSERT expects plain uncompressed CSV.
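For example, a hedged sketch of the compressed export, reusing the orders table and prefix from the Syntax section; decompress the files (for instance with gzip -d or 7-Zip) before running BULK INSERT:

UNLOAD ($$SELECT * FROM public.orders$$)
TO 's3://galaxy-migrations/orders_gz_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV HEADER
GZIP;  -- each output file is written with a .gz extension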

How do I migrate terabytes of data?

Split each table into multiple slices with UNLOAD's parallelism, use multiple BULK INSERT statements inside a transaction, and consider increasing SQL Server's MAXDOP.
