Move data from Amazon Redshift tables into a MySQL database using UNLOAD, S3, and LOAD DATA or DMS for minimal downtime.
Export data from Redshift, stage it in S3, create matching MySQL tables, import the staged files, and validate row counts. Automate with AWS Database Migration Service (DMS) or a CI pipeline for repeatability.
Use UNLOAD to write comma-separated files to S3. Add quotes, specify a delimiter, and turn off parallelism to produce single CSV files you can load straight into MySQL.
UNLOAD ('SELECT * FROM Customers')
TO 's3://acme-dwh/exports/customers_'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Access'
DELIMITER ','
ADDQUOTES
ALLOWOVERWRITE
PARALLEL OFF;
Reverse-engineer Redshift DDL with pg_dump -s or system-catalog queries (for example, the v_generate_tbl_ddl view from the amazon-redshift-utils repository), convert data types (e.g., VARCHAR maps directly, BOOLEAN → TINYINT(1)), then run the adjusted DDL in MySQL.
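As an illustrative sketch (the table and column names are assumptions, not taken from your schema), a Redshift definition and its adjusted MySQL counterpart might look like:

```sql
-- Redshift DDL (source)
CREATE TABLE Customers (
    id         BIGINT IDENTITY(1,1),
    name       VARCHAR(255),
    email      VARCHAR(255),
    is_active  BOOLEAN,
    created_at TIMESTAMP
);

-- Adjusted MySQL DDL (target): BOOLEAN -> TINYINT(1),
-- IDENTITY -> AUTO_INCREMENT, TIMESTAMP -> DATETIME
CREATE TABLE Customers (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(255),
    email      VARCHAR(255),
    is_active  TINYINT(1),
    created_at DATETIME
);
```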
Download or mount the S3 files, then bulk-load with LOAD DATA INFILE. Add LOCAL if the files live on the client (the local_infile setting must be enabled on both server and client), match the delimiters, and map columns in order.
LOAD DATA LOCAL INFILE '/tmp/customers_000.csv'
INTO TABLE Customers
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, name, email, created_at);
Yes. AWS DMS migrates Redshift to MySQL with full load and CDC. Define Redshift as the source, MySQL as the target, enable ongoing replication, and cut over when replication lag reaches zero.
Repeat the UNLOAD/LOAD flow for Orders, OrderItems, and Products. Maintain foreign-key order: load parent tables first, or disable constraints during the import and re-enable them afterward.
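If load order is inconvenient, you can disable constraint checks for the MySQL session during the import. A minimal sketch (the file path is an assumption):

```sql
SET FOREIGN_KEY_CHECKS = 0;  -- skip FK validation during the bulk load

LOAD DATA LOCAL INFILE '/tmp/orders_000.csv'
INTO TABLE Orders
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

SET FOREIGN_KEY_CHECKS = 1;  -- re-enable for normal operation
```

Note that rows loaded while checks are off are not validated retroactively, so run your row-count and hash checks afterward.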
Run COUNT checks and sample hash comparisons. For example, SELECT COUNT(*) FROM Customers; should return the same value on both sides. For deeper checks, hash each row and compare aggregates.
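One portable way to compare row-level content (a sketch; the column list is an assumption) is to hash each row, keep the leading hex digits, and sum them as integers on both sides:

```sql
-- Redshift: sum the first 8 hex digits of each row's MD5
SELECT SUM(STRTOL(SUBSTRING(MD5(id || '|' || name || '|' || email), 1, 8), 16))
FROM Customers;

-- MySQL equivalent: CONV() converts the hex prefix to decimal
SELECT SUM(CONV(SUBSTRING(MD5(CONCAT_WS('|', id, name, email)), 1, 8), 16, 10))
FROM Customers;
```

If any column is nullable, COALESCE it first: Redshift's || propagates NULL while MySQL's CONCAT_WS skips NULLs, so the two sides would otherwise disagree.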
-- in Redshift
SELECT 'Customers' AS tbl, COUNT(*) FROM Customers
UNION ALL
SELECT 'Orders', COUNT(*) FROM Orders;
-- in MySQL
SELECT 'Customers', COUNT(*) FROM Customers
UNION ALL
SELECT 'Orders', COUNT(*) FROM Orders;
Galaxy’s AI copilot can generate UNLOAD and LOAD commands, auto-adjust data types, and store migration scripts in Collections for team reuse. Endorse scripts once validated so teammates run the same trusted SQL.
Yes. Use AWS DMS with change data capture so inserts, updates, and deletes replicate until you cut over.
PARALLEL ON (the default) writes multiple files concurrently, one per node slice, which speeds up large exports. Turn it off when you need a single CSV for MySQL.
With PARALLEL ON, Redshift caps each output file at 6.2 GB by default (tunable with MAXFILESIZE). MySQL's LOAD DATA has no strict limit, but smaller files (≤1 GB) are easier to retry.
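To keep parallel exports in retry-friendly sizes, UNLOAD accepts a MAXFILESIZE option. A sketch reusing the earlier bucket and role (both assumptions):

```sql
UNLOAD ('SELECT * FROM Orders')
TO 's3://acme-dwh/exports/orders_'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Access'
DELIMITER ','
ADDQUOTES
MAXFILESIZE 1 GB;  -- cap each output file at 1 GB
```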