Move data, schema, and workloads from Snowflake to MySQL with export-load techniques, preserving integrity and performance.
Cut licensing costs, embed transactional workloads, or consolidate your stack: MySQL offers open-source flexibility and can run side-by-side with application servers for low-latency access.
In Snowflake, grant the privileges needed to unload (SELECT on the source tables and WRITE on the stage) and configure an external stage on S3, GCS, or Azure Blob Storage. On the target side, install MySQL 8.0 with local_infile enabled on both server and client, and ensure enough disk space for the staged CSV/Parquet files.
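A minimal sketch of enabling local loads on MySQL 8.0 (the user and database names in the client command are illustrative):
-- MySQL: allow LOAD DATA LOCAL INFILE on the server
SET GLOBAL local_infile = 1;
-- The client must opt in as well, for example:
--   mysql --local-infile=1 -u migrator -p target_db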
Use the COPY INTO @stage command to push CSV or Parquet files to cloud storage. Include MAX_FILE_SIZE and HEADER = TRUE for manageable chunks and clear column mapping.
-- Snowflake
COPY INTO @s3_stage/customers_
FROM customers
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',')
HEADER = TRUE
MAX_FILE_SIZE = 50000000;
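Before moving on, it helps to confirm the unload produced the expected files:
-- Snowflake: list the staged files and their sizes
LIST @s3_stage;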
Generate CREATE TABLE statements with Snowflake metadata queries or DESC TABLE, then map Snowflake types to MySQL equivalents: NUMBER to INT or DECIMAL (depending on precision and scale), VARCHAR to VARCHAR, and TIMESTAMP_NTZ to DATETIME.
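A sketch of such a metadata query against Snowflake's INFORMATION_SCHEMA (the schema and table names are illustrative):
-- Snowflake: list column names and types to drive the mapping
SELECT column_name, data_type, character_maximum_length, numeric_precision
FROM information_schema.columns
WHERE table_schema = 'PUBLIC' AND table_name = 'CUSTOMERS'
ORDER BY ordinal_position;
The mapped MySQL table then looks like this: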
CREATE TABLE customers (
id INT PRIMARY KEY,
name VARCHAR(255),
email VARCHAR(255),
created_at DATETIME
);
Use LOAD DATA LOCAL INFILE with proper field and line terminators. Disable foreign-key checks during bulk loads (see the sketch after the example), and import the largest tables first.
LOAD DATA LOCAL INFILE '/tmp/customers_0001.csv'
INTO TABLE customers
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
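As mentioned above, a sketch of relaxing integrity checks around the bulk load (restore them once the load finishes):
-- MySQL: speed up bulk loads; re-enable checks afterwards
SET foreign_key_checks = 0;
SET unique_checks = 0;
-- ... run the LOAD DATA statements here ...
SET unique_checks = 1;
SET foreign_key_checks = 1;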
Hash each row in both Snowflake and MySQL, compare aggregates (see the sketch below), enforce UTF-8 in file formats, and schedule bulk loads behind maintenance windows.
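A minimal aggregate comparison that runs unchanged on both systems (column names follow the sample customers table):
-- Run on both Snowflake and MySQL; the results should match
SELECT COUNT(*)        AS row_count,
       SUM(id)         AS id_sum,
       MIN(created_at) AS min_created,
       MAX(created_at) AS max_created
FROM customers;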
Incorrect delimiter settings and ignored time-zone conversions lead to misaligned columns and date drift; test with small subsets first.
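One way to avoid date drift is to normalize timestamps to UTC in the export query itself; a sketch (the two-argument CONVERT_TIMEZONE interprets a TIMESTAMP_NTZ column in the session time zone):
-- Snowflake: unload UTC-normalized timestamps
SELECT id, name, email,
       CONVERT_TIMEZONE('UTC', created_at) AS created_at
FROM customers;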
Incremental migration is possible: export only new or changed rows using Snowflake timestamps, then upsert them in MySQL with INSERT … ON DUPLICATE KEY UPDATE, as sketched below.
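A sketch of the upsert, assuming the incremental export was first loaded into a staging table named customers_delta (a hypothetical name):
-- MySQL: merge incremental rows into the target table
INSERT INTO customers (id, name, email, created_at)
SELECT d.id, d.name, d.email, d.created_at
FROM customers_delta AS d
ON DUPLICATE KEY UPDATE
    name       = d.name,
    email      = d.email,
    created_at = d.created_at;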
Convert Snowflake sequences to MySQL AUTO_INCREMENT columns, or track last-used values in a separate metadata table, as sketched below.
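A sketch of the conversion; the seed value is illustrative and should be derived from the sequence's last value or MAX(id) + 1:
-- MySQL: make id self-assigning and seed it past the migrated rows
ALTER TABLE customers MODIFY id INT NOT NULL AUTO_INCREMENT;
ALTER TABLE customers AUTO_INCREMENT = 100001;  -- illustrative seed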
To verify, compare row counts and checksums per table; MySQL's CRC32 works for MySQL-to-MySQL comparisons, but cross-system checks need a hash available on both sides. For critical tables, sample roughly 1% of rows and validate them manually, as sketched below.
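For cross-system spot checks, use a hash function both engines provide, such as MD5; a sketch (the sampling predicate assumes roughly dense integer ids, and datetime-to-string casts can differ between engines):
-- Run the same query on both sides and diff the output
SELECT id,
       MD5(CONCAT_WS('|', name, email, created_at)) AS row_hash
FROM customers
WHERE MOD(id, 100) = 0;  -- roughly a 1% sample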