How to Migrate from MySQL to Snowflake

Galaxy Glossary

How do I migrate a MySQL database to Snowflake?

Migrating from MySQL to Snowflake exports data from MySQL, stages it in cloud storage, and loads it into Snowflake with COPY INTO while reproducing schema and constraints.


Description

Why migrate from MySQL to Snowflake?

Snowflake separates storage from compute, scales query resources automatically, and offers zero-maintenance clustering, making analytics faster and cheaper than a self-hosted or RDS MySQL setup as data grows.

What are the major migration steps?

1) Export MySQL tables to CSV/Parquet.
2) Upload the files to cloud storage.
3) Create matching Snowflake tables.
4) Define a stage.
5) Run COPY INTO for each table.
6) Validate row counts and checksums.

How do I export MySQL data safely?

Lock tables, run mysqldump --no-create-info --tab=/tmp or SELECT ... INTO OUTFILE, and compress each file. Record row totals for post-load validation.

Example: Export Customers to CSV

mysqldump -h mysql.prod -u admin -p shop_db Customers \
  --no-create-info --tab=/tmp \
  --fields-terminated-by=',' --fields-enclosed-by='"'

How do I create equivalent Snowflake tables?

Translate data types: INT → INTEGER, VARCHAR(n) → STRING, DATETIME → TIMESTAMP_NTZ. You can still declare primary keys, but Snowflake records them as metadata only and does not enforce them.

Example: Customers table DDL

CREATE OR REPLACE TABLE customers (
  id INTEGER,
  name STRING,
  email STRING,
  created_at TIMESTAMP_NTZ
);
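The type translation described above can be scripted when many tables are involved. Below is a minimal, illustrative sketch; the mapping covers only the types mentioned in this article and the function names are hypothetical, not part of any migration tool:

```python
# Illustrative sketch: translate MySQL column types to Snowflake equivalents
# and emit CREATE TABLE DDL. Extend MYSQL_TO_SNOWFLAKE for your schema.
import re

MYSQL_TO_SNOWFLAKE = {
    "INT": "INTEGER",
    "BIGINT": "INTEGER",
    "VARCHAR": "STRING",
    "TEXT": "STRING",
    "DATETIME": "TIMESTAMP_NTZ",
    "DECIMAL": "NUMBER",
}

def translate_type(mysql_type: str) -> str:
    """Map a MySQL type like 'VARCHAR(255)' to its Snowflake counterpart."""
    base = re.match(r"[A-Za-z_]+", mysql_type).group(0).upper()
    target = MYSQL_TO_SNOWFLAKE.get(base, base)  # pass unknown types through
    # Keep precision/scale for NUMBER; lengths on STRING are dropped.
    if target == "NUMBER" and "(" in mysql_type:
        target += mysql_type[mysql_type.index("("):]
    return target

def build_ddl(table, columns):
    """columns is a list of (name, mysql_type) pairs."""
    cols = ",\n  ".join(f"{name} {translate_type(t)}" for name, t in columns)
    return f"CREATE OR REPLACE TABLE {table} (\n  {cols}\n);"

print(build_ddl("customers", [
    ("id", "INT"),
    ("name", "VARCHAR(100)"),
    ("email", "VARCHAR(255)"),
    ("created_at", "DATETIME"),
]))
```

Running this prints DDL equivalent to the Customers example above, and the same helper can be fed column lists pulled from information_schema.columns in MySQL.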

How do I stage the exported files?

Upload the CSVs to S3, GCS, or Azure. Then create a stage with credentials and file format so Snowflake can reach the files securely.

Stage creation

CREATE OR REPLACE STAGE mysql_export
  URL='s3://shop-mysql/'
  CREDENTIALS=(aws_key_id='•••' aws_secret_key='•••')
  FILE_FORMAT=(TYPE=CSV FIELD_DELIMITER=',' SKIP_HEADER=0);

How do I load data into Snowflake?

Run COPY INTO for each table, reference the stage path, and set ON_ERROR to CONTINUE or ABORT_STATEMENT depending on quality needs.

Copy Customers data

COPY INTO customers
FROM @mysql_export/Customers.csv
ON_ERROR = CONTINUE;

How do I verify the migration?

Compare SELECT COUNT(*) between MySQL and Snowflake, checksum key columns on both sides (for example, a CRC32 aggregate in MySQL against an equivalent hash aggregate such as HASH_AGG in Snowflake), run sample queries, and test application read paths before switching traffic.
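One practical way to anchor the comparison is to checksum the exported files themselves, since they sit between the two systems. The helper below is a hypothetical sketch, not part of any official tooling; it computes a row count plus an order-independent CRC32 aggregate that can be checked against MySQL-side figures (e.g., SELECT COUNT(*) plus a BIT_XOR(CRC32(...)) aggregate in MySQL 8.0):

```python
# Hypothetical validation helper: count rows and checksum a CSV export so the
# totals can be compared against the source database and the rows Snowflake
# reports after COPY INTO.
import csv
import zlib

def rowcount_and_checksum(path):
    """Return (row_count, checksum) for a CSV export file.

    The checksum XORs the CRC32 of each row, so it is independent of row
    order -- useful because exports and loads rarely preserve ordering.
    """
    count, checksum = 0, 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            count += 1
            checksum ^= zlib.crc32(",".join(row).encode())
    return count, checksum
```

Record these numbers per file at export time, then re-verify them before and after the COPY INTO step.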

Best practices for a smooth migration

Break huge tables into ≤100 MB files, enable multi-cluster warehouses during loads, script every step for repeatability, and maintain a rollback snapshot until validation passes.
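Splitting large exports into roughly 100 MB files can also be scripted. The sketch below is illustrative (the part-naming scheme and size threshold are assumptions, not a Snowflake requirement); it breaks a CSV on line boundaries so each chunk stays loadable on its own:

```python
# Sketch: split a large CSV export into ~max_bytes chunks on line boundaries
# so Snowflake can load the pieces in parallel.
import os

def split_csv(path, max_bytes=100 * 1024 * 1024):
    """Write <base>_part0.csv, <base>_part1.csv, ... and return their paths."""
    parts, buf, size, idx = [], [], 0, 0
    base, ext = os.path.splitext(path)

    def flush():
        nonlocal buf, size, idx
        if not buf:
            return
        part = f"{base}_part{idx}{ext}"
        with open(part, "w") as out:
            out.writelines(buf)
        parts.append(part)
        buf, size, idx = [], 0, idx + 1

    with open(path) as f:
        for line in f:
            # Start a new chunk before this line would push us past the limit.
            if size + len(line) > max_bytes and buf:
                flush()
            buf.append(line)
            size += len(line)
    flush()
    return parts
```

Compress each part (e.g., gzip) before uploading; Snowflake detects the compression automatically during COPY INTO.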

Why Migrating from MySQL to Snowflake Is Important

How to Migrate from MySQL to Snowflake: Example Usage


-- Migrate Orders and OrderItems in one script
CREATE OR REPLACE TABLE orders (
  id INTEGER,
  customer_id INTEGER,
  order_date DATE,
  total_amount NUMBER(12,2)
);

CREATE OR REPLACE TABLE orderitems (
  id INTEGER,
  order_id INTEGER,
  product_id INTEGER,
  quantity INTEGER
);

COPY INTO orders
FROM @mysql_export/Orders.csv
ON_ERROR=CONTINUE;

COPY INTO orderitems
FROM @mysql_export/OrderItems.csv
ON_ERROR=CONTINUE;

How to Migrate from MySQL to Snowflake: Syntax


-- 1. Export data from MySQL
mysqldump -h <mysql_host> -u <user> -p shop_db Products \
  --no-create-info --tab=/tmp \
  --fields-terminated-by=',' --fields-enclosed-by='"'

-- 2. Upload the file to S3
aws s3 cp /tmp/Products.txt s3://shop-mysql/Products.csv

-- 3. Declare an external stage in Snowflake
CREATE OR REPLACE STAGE mysql_export
  URL='s3://shop-mysql/'
  CREDENTIALS=(aws_key_id='AWS_KEY' aws_secret_key='AWS_SECRET')
  FILE_FORMAT=(TYPE=CSV FIELD_DELIMITER=',' SKIP_HEADER=0);

-- 4. Re-create the table structure
CREATE OR REPLACE TABLE products (
  id INTEGER,
  name STRING,
  price NUMBER(10,2),
  stock INTEGER
);

-- 5. Load the data
COPY INTO products
FROM @mysql_export/Products.csv
ON_ERROR = CONTINUE
PURGE = FALSE;

Common Mistakes

Frequently Asked Questions (FAQs)

Can I migrate while MySQL is live?

Yes. Perform an initial bulk load, then capture binlog changes with a CDC tool and stream them into Snowflake (for example, via Snowpipe) until cutover.

How big can the files be?

Snowflake ingests files up to 16 GB compressed, but 100 MB–1 GB files load faster in parallel and retry more gracefully.

What about AUTO_INCREMENT columns?

Store the values directly; Snowflake does not auto-generate IDs. If you need new IDs post-migration, create a SEQUENCE and update rows.

