Move data, schema, and routines from Snowflake to MariaDB using unload files, type mapping, and bulk-load commands.
Lower licensing costs, on-premises installation, and MySQL compatibility often drive teams to move analytics or OLTP workloads from Snowflake to MariaDB.
1) Extract Snowflake objects
2) Unload table data
3) Transform schema & data types
4) Bulk-load into MariaDB
5) Validate row counts & queries
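The five phases above can be sketched as a minimal driver loop. Every function here is a placeholder for the commands described in the sections below, not a real tool:

```python
# Migration driver sketch; each step stands in for the SQL shown later.
def extract_objects():       # read table list from INFORMATION_SCHEMA
    return ["orders", "customers"]

def unload(table):           # COPY INTO + download from the stage
    return f"{table}.csv.gz"

def transform_schema(table): # map types, emit CREATE TABLE DDL
    return f"CREATE TABLE {table} (...);"

def bulk_load(table, path):  # LOAD DATA LOCAL INFILE
    pass

def validate(table):         # row counts and sample aggregates
    return True

def migrate():
    results = {}
    for table in extract_objects():
        path = unload(table)
        transform_schema(table)
        bulk_load(table, path)
        results[table] = validate(table)
    return results
```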
Use COPY INTO 's3://bucket/path/' FROM my_table FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='"') to create compressed CSV files partitioned by table (CSV unloads are gzip-compressed by default, and the statement needs a FROM clause naming the source table).
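When unloading many tables, it helps to generate one COPY INTO statement per table so each table's files land under their own prefix. A small sketch, assuming a bucket name of your choosing (`my-bucket` and the `HEADER` option shown are illustrative defaults):

```python
# Build a per-table COPY INTO unload statement; bucket/table names are examples.
def unload_stmt(table, bucket="my-bucket"):
    return (
        f"COPY INTO 's3://{bucket}/{table}/' FROM {table} "
        "FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='\"' COMPRESSION=GZIP) "
        "HEADER=FALSE"
    )
```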
Select catalog metadata from INFORMATION_SCHEMA.COLUMNS, map Snowflake types (NUMBER → BIGINT for integer columns that fit in 64 bits, otherwise DECIMAL(p,s); VARCHAR → VARCHAR; TIMESTAMP_NTZ → DATETIME), and emit CREATE TABLE DDL.
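The type mapping can be expressed as a small function. This is a sketch covering the common cases only; extend the table for the types your schema actually uses:

```python
# Snowflake -> MariaDB type mapping sketch. NUMBER becomes BIGINT only when
# it is an integer narrow enough for 64 bits; otherwise DECIMAL keeps precision.
def map_type(sf_type, precision=None, scale=None):
    if sf_type == "NUMBER":
        if (scale or 0) == 0 and (precision or 38) <= 18:
            return "BIGINT"
        return f"DECIMAL({precision or 38},{scale or 0})"
    return {
        "VARCHAR": f"VARCHAR({precision})" if precision else "TEXT",
        "TIMESTAMP_NTZ": "DATETIME",
        "BOOLEAN": "TINYINT(1)",
        "FLOAT": "DOUBLE",
    }.get(sf_type, "TEXT")
```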
Snowflake doesn’t enforce keys, so manually add PRIMARY KEY and INDEX clauses to the generated DDL based on your application logic or foreign-key columns.
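One way to do this is a post-processing step over the generated DDL. A hypothetical helper (the DDL shape it assumes — a trailing `);` — is an assumption about your generator's output):

```python
# Inject a PRIMARY KEY clause into generated DDL, since Snowflake's key
# metadata is informational only and never enforced.
def add_primary_key(ddl, columns):
    body = ddl.rstrip().removesuffix(");").rstrip()
    return body + f",\n  PRIMARY KEY ({', '.join(columns)})\n);"
```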
LOAD DATA LOCAL INFILE 'file.csv' INTO TABLE tab FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' yields the fastest ingest; run the client with --local-infile=1 and make sure local_infile is enabled on the server.
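Scripting the loads means generating one such statement per file and handing it to the mysql client. A sketch, assuming placeholder database and table names:

```python
import subprocess

# Build the LOAD DATA statement for one CSV file; names are placeholders.
def load_stmt(table, path):
    return (
        f"LOAD DATA LOCAL INFILE '{path}' INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
        "LINES TERMINATED BY '\\n'"
    )

# Run it through the mysql client with client-side local-infile enabled.
def run_load(table, path, db="target_db"):
    subprocess.run(
        ["mysql", "--local-infile=1", db, "-e", load_stmt(table, path)],
        check=True,
    )
```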
1) COPY INTO @~/orders.csv.gz FROM Orders FILE_FORMAT=(TYPE=CSV COMPRESSION=GZIP) SINGLE=TRUE; (internal stage paths are not quoted; SINGLE=TRUE keeps the literal file name)
2) Download the file (e.g. GET @~/orders.csv.gz file:///tmp/ in SnowSQL) and gunzip it.
3) LOAD DATA LOCAL INFILE 'orders.csv' INTO TABLE Orders;
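If gunzip isn't available on the machine doing the transfer, step 2's decompression can be done in Python (file paths here are illustrative):

```python
import gzip
import shutil

# Decompress one unloaded .csv.gz file so LOAD DATA can read plain CSV.
def gunzip(src, dst):
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```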
Snowflake SQL differs from MariaDB; rewrite views with MySQL syntax and convert JavaScript or Snowflake Scripting procedures into MariaDB’s SQL/PSM.
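For simple views, some of the dialect differences are mechanical enough to script. A tiny illustrative rewrite pass for two common Snowflake-isms — IFF(...) has the same argument order as MariaDB's IF(...), and expr::TYPE casts become CAST(expr AS TYPE); anything beyond patterns like these needs a real SQL parser, not regexes:

```python
import re

# Toy dialect-translation pass; handles only two specific patterns.
def to_mariadb(sql):
    sql = re.sub(r"\bIFF\s*\(", "IF(", sql, flags=re.IGNORECASE)
    sql = re.sub(r"(\w+)::NUMBER\b", r"CAST(\1 AS DECIMAL)", sql, flags=re.IGNORECASE)
    return sql
```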
Run row-count checks (SELECT COUNT(*)) and sample aggregations (SUM, MIN, MAX) on critical columns in both systems and compare results.
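Collecting those metrics into one structure per system makes the comparison a one-liner. A sketch, where each dict maps table name to a tuple of metrics (row count, column sum, and so on — whatever you collected):

```python
# Return only the tables whose metrics disagree between the two systems.
def diff_metrics(snowflake, mariadb):
    return {
        t: (snowflake[t], mariadb.get(t))
        for t in snowflake
        if snowflake[t] != mariadb.get(t)
    }
```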
Split files by date or ID range, drop secondary indexes before the load and recreate them afterwards (InnoDB ignores ALTER TABLE ... DISABLE KEYS), set innodb_buffer_pool_size high, and wrap loads in transactions to allow quick rollback.
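Splitting is worth scripting so each chunk gets its own LOAD DATA run, transaction, and retry. A minimal sketch that splits by row count (splitting by date or ID range follows the same pattern):

```python
# Break a list of CSV rows into fixed-size chunks; each chunk becomes one
# LOAD DATA statement inside its own transaction.
def split_rows(rows, chunk_size):
    return [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
```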
COPY INTO is Snowflake’s unload command; it writes to an internal stage or to external cloud storage. There is no separate UNLOAD command.
Convert to UTC in Snowflake using CONVERT_TIMEZONE('UTC', ts)::TIMESTAMP_NTZ (Snowflake has no AT TIME ZONE clause) before export, then load into a MariaDB DATETIME or TIMESTAMP column.
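What the UTC normalization buys you, shown with Python datetimes: a zone-aware timestamp converted to UTC and stripped of its offset is exactly the naive value a MariaDB DATETIME column will store:

```python
from datetime import datetime, timezone, timedelta

# Convert an aware timestamp to UTC and drop tzinfo, mirroring the
# TIMESTAMP_NTZ value the export produces.
def to_utc_naive(ts):
    return ts.astimezone(timezone.utc).replace(tzinfo=None)
```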