How to Migrate from Redshift to MySQL

Galaxy Glossary

How do I migrate data from Amazon Redshift to MySQL without downtime?

Move data from Amazon Redshift tables into a MySQL database using UNLOAD, S3, and LOAD DATA or DMS for minimal downtime.


Description

What are the high-level steps?

Export data from Redshift, stage it in S3, create matching MySQL tables, import the staged files, and validate row counts. Automate with AWS Data Migration Service (DMS) or a CI pipeline for repeatability.

How do I export Redshift tables?

Use UNLOAD to write comma-separated files to S3. Add quotes, specify a delimiter, and turn off parallelism to get single CSV files you can easily load into MySQL.

UNLOAD ('SELECT * FROM Customers')
TO 's3://acme-dwh/exports/customers_'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Access'
DELIMITER ','
ADDQUOTES
ALLOWOVERWRITE
PARALLEL OFF;

How do I create identical MySQL schemas?

Reverse-engineer the Redshift DDL with pg_dump -s or the v_generate_tbl_ddl admin view, convert data types (e.g., VARCHAR maps directly, BOOLEAN becomes TINYINT(1)), then run the adjusted DDL in MySQL.
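
As a minimal sketch of the conversion, using the Customers table from the examples here (the is_active column is added only to illustrate the BOOLEAN mapping):

-- Redshift DDL (source)
CREATE TABLE Customers (
    id         BIGINT IDENTITY(1,1),
    name       VARCHAR(255),
    email      VARCHAR(255),
    is_active  BOOLEAN,
    created_at TIMESTAMP
);

-- Equivalent MySQL DDL (target)
CREATE TABLE Customers (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(255),
    email      VARCHAR(255),
    is_active  TINYINT(1),   -- BOOLEAN becomes TINYINT(1)
    created_at DATETIME      -- TIMESTAMP maps to DATETIME (or TIMESTAMP)
);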

How do I load the data into MySQL?

Download or mount the S3 files, then bulk-load with LOAD DATA INFILE. Set LOCAL if the files live on the client, match delimiters, and map columns in order.

LOAD DATA LOCAL INFILE '/tmp/customers_000.csv'
INTO TABLE Customers
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, name, email, created_at);
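
To stage the UNLOAD output locally before running the load above, the AWS CLI can copy the exported files down; the /tmp target and customers_ prefix simply reuse the earlier examples:

aws s3 cp s3://acme-dwh/exports/ /tmp/ --recursive --exclude "*" --include "customers_*"

Note that LOAD DATA LOCAL INFILE also requires the local_infile option to be enabled on both the MySQL server and the client.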

Can I avoid manual exports?

Yes. AWS DMS migrates Redshift to MySQL with full load and CDC. Define Redshift as the source, MySQL as the target, enable "ongoing replication," and cut over when replication lag reaches zero.
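
Before cutting over, the task can be polled from the CLI; a minimal sketch, assuming you have the task ARN:

aws dms describe-replication-tasks \
  --filters Name=replication-task-arn,Values=<task_arn> \
  --query 'ReplicationTasks[0].{Status:Status,Stats:ReplicationTaskStats}'

Replication lag itself is reported through the CDCLatencySource and CDCLatencyTarget CloudWatch metrics in the AWS/DMS namespace.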

How do I migrate Orders and related tables?

Repeat the UNLOAD/LOAD flow for Orders, OrderItems, and Products. Maintain foreign-key order—load parents first or disable constraints during import and re-enable afterward.
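
If loading children before parents is unavoidable, MySQL's foreign-key checks can be switched off for the session around the bulk load; the OrderItems column list below is illustrative:

-- Disable FK checks for this session only, load, then re-enable
SET FOREIGN_KEY_CHECKS = 0;

LOAD DATA LOCAL INFILE '/tmp/orderitems_000.csv'
INTO TABLE OrderItems
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, order_id, product_id, quantity, unit_price);

SET FOREIGN_KEY_CHECKS = 1;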

How do I validate the migration?

Run COUNT checks and sample hash comparisons. For example, SELECT COUNT(*) FROM Customers; should return the same number on both sides. For deeper checks, hash each row and compare aggregates.

Quick row-count script

-- in Redshift
SELECT 'Customers' AS table_name, COUNT(*) FROM Customers
UNION ALL
SELECT 'Orders', COUNT(*) FROM Orders;
-- in MySQL
SELECT 'Customers', COUNT(*) FROM Customers
UNION ALL
SELECT 'Orders', COUNT(*) FROM Orders;
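
For the row-hash comparison, one sketch is to sum a 32-bit slice of each row's MD5 on both sides and compare the totals. It assumes id, name, and email are non-null (wrap nullable columns in COALESCE) and render to identical strings in both engines:

-- in Redshift
SELECT SUM(STRTOL(LEFT(MD5(CAST(id AS VARCHAR) || '|' || name || '|' || email), 8), 16)) AS row_hash_sum
FROM Customers;
-- in MySQL
SELECT SUM(CAST(CONV(LEFT(MD5(CONCAT(id, '|', name, '|', email)), 8), 16, 10) AS UNSIGNED)) AS row_hash_sum
FROM Customers;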

When should I use Galaxy?

Galaxy’s AI copilot can generate UNLOAD and LOAD commands, auto-adjust data types, and store migration scripts in Collections for team reuse. Endorse scripts once validated so teammates run the same trusted SQL.


Why How to Migrate from Redshift to MySQL is important

How to Migrate from Redshift to MySQL Example Usage


-- Export and import the Orders table
-- 1. Redshift
UNLOAD ('SELECT * FROM Orders')
TO 's3://acme-dwh/exports/orders_'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Access'
DELIMITER ',' ADDQUOTES ALLOWOVERWRITE PARALLEL OFF;

-- 2. MySQL
LOAD DATA LOCAL INFILE '/tmp/orders_000.csv'
INTO TABLE Orders
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, customer_id, order_date, total_amount);

How to Migrate from Redshift to MySQL Syntax


-- Redshift ➜ S3
UNLOAD ('SELECT * FROM <table_name>')
TO 's3://<bucket>/<prefix>'
IAM_ROLE '<role_arn>'
[DELIMITER ',' | '\t']
[ADDQUOTES]
[PARALLEL {ON|OFF}]
[ALLOWOVERWRITE];

-- MySQL ⇐ S3/Local
LOAD DATA [LOCAL] INFILE '<file_path>'
INTO TABLE <table_name>
FIELDS TERMINATED BY ','
[ENCLOSED BY '"']
LINES TERMINATED BY '\n'
(column1, column2, ...);

-- AWS DMS task (CLI)
aws dms create-replication-task \
  --replication-task-identifier <task_name> \
  --replication-instance-arn <replication_instance_arn> \
  --source-endpoint-arn <redshift_ep> \
  --target-endpoint-arn <mysql_ep> \
  --migration-type full-load-and-cdc \
  --table-mappings file://mappings.json \
  --replication-task-settings file://settings.json
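
The mappings.json referenced above holds DMS table-mapping rules; a minimal selection rule that includes every table in the public schema looks roughly like this:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-public",
      "object-locator": { "schema-name": "public", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}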

Common Mistakes

Frequently Asked Questions (FAQs)

Can I keep Redshift running during migration?

Yes. Use AWS DMS with change data capture so inserts, updates, and deletes replicate until you cut over.

Is UNLOAD faster with PARALLEL ON?

PARALLEL ON writes multiple files concurrently, speeding large exports. Turn it off if you need single CSVs for MySQL.

How big can each file be?

With PARALLEL ON, Redshift splits the output into files of up to 6.2 GB by default. MySQL LOAD DATA has no strict limit, but smaller (≤1 GB) files are easier to retry.
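
To produce smaller chunks, UNLOAD accepts a MAXFILESIZE clause; a sketch reusing the earlier bucket and role:

UNLOAD ('SELECT * FROM Orders')
TO 's3://acme-dwh/exports/orders_'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Access'
DELIMITER ',' ADDQUOTES ALLOWOVERWRITE
MAXFILESIZE 1 GB;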
