Move data, schema, and workloads from MariaDB to Snowflake using dumps, Snowpipe, and SQL rewrite.
Snowflake separates storage and compute, scales on demand, and removes most ops work. Migrating lets teams run large analytical queries, share data securely, and cut hardware costs.
1️⃣ Export MariaDB schema & data. 2️⃣ Create matching structures in Snowflake. 3️⃣ Load data with COPY or Snowpipe. 4️⃣ Rewrite application SQL. 5️⃣ Validate counts & spot-check rows. 6️⃣ Cut over and decommission MariaDB.
Use mariadb-dump (mysqldump) with --tab or SELECT … INTO OUTFILE to create CSV extracts, then compress them (or convert to Parquet) before upload. Split large tables by primary-key range to parallelize uploads.
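If you prefer to stay in SQL, a chunked SELECT … INTO OUTFILE gives you the same per-table files. This is a minimal sketch; the orders table, the id key, the chunk boundaries, and the /tmp path are all hypothetical.

```sql
-- Export one primary-key chunk of a hypothetical orders table to CSV on the
-- MariaDB server; run several of these in parallel with different ranges.
SELECT *
FROM orders
WHERE id BETWEEN 1 AND 1000000
INTO OUTFILE '/tmp/orders_chunk_001.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```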
Generate CREATE TABLE statements with the correct Snowflake data types (e.g., VARCHAR instead of TEXT, NUMBER instead of INT). Store them in a migration script and run it through the Snowflake UI or SnowSQL.
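As an illustration, here is how a typical MariaDB table might translate; the table and column names are placeholders, not part of any real schema.

```sql
-- MariaDB original (illustrative):
--   CREATE TABLE orders (
--       id INT AUTO_INCREMENT PRIMARY KEY,
--       email TEXT,
--       total DECIMAL(10,2),
--       created_at DATETIME
--   );
-- Snowflake translation with equivalent types:
CREATE TABLE orders (
    id         NUMBER IDENTITY PRIMARY KEY,  -- AUTO_INCREMENT -> IDENTITY
    email      VARCHAR,                      -- TEXT -> VARCHAR
    total      NUMBER(10,2),                 -- DECIMAL -> NUMBER
    created_at TIMESTAMP_NTZ                 -- DATETIME -> TIMESTAMP_NTZ
);
```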
Stage files in S3/Azure/GCS, then use COPY INTO. For continuous feeds, configure Snowpipe with cloud notifications so new files load automatically.
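A minimal sketch of the bulk load and a matching pipe; the stage name mariadb_stage, the file path, and the orders table are assumptions.

```sql
-- One-time bulk load from a named external stage into the target table.
COPY INTO orders
FROM @mariadb_stage/orders/
FILE_FORMAT = (TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"')
ON_ERROR = 'ABORT_STATEMENT';

-- Continuous loading: a pipe wrapping the same COPY, triggered by cloud
-- storage notifications (AUTO_INGEST).
CREATE PIPE orders_pipe AUTO_INGEST = TRUE AS
    COPY INTO orders
    FROM @mariadb_stage/orders/
    FILE_FORMAT = (TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```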
Enable binary log (binlog) replication and stream changes through Kafka or Fivetran into a staging table, then MERGE them into the final Snowflake table until the switchover date.
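The exact MERGE depends on how your CDC tool lands binlog events; this sketch assumes a hypothetical orders_staging table that carries an op flag ('I', 'U', 'D') alongside the row columns.

```sql
-- Apply streamed changes from the staging table to the final table.
MERGE INTO orders AS t
USING orders_staging AS s
    ON t.id = s.id
WHEN MATCHED AND s.op = 'D' THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET t.email = s.email, t.total = s.total, t.created_at = s.created_at
WHEN NOT MATCHED AND s.op <> 'D' THEN
    INSERT (id, email, total, created_at)
    VALUES (s.id, s.email, s.total, s.created_at);
```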
Replace AUTO_INCREMENT with IDENTITY or SEQUENCE. Swap NOW() for CURRENT_TIMESTAMP(). Remove engine hints and vendor-specific functions.
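To make the rewrite concrete, here is a small before/after; the table and columns are illustrative, and GROUP_CONCAT → LISTAGG stands in for the vendor-specific functions mentioned above.

```sql
-- MariaDB (illustrative):
--   SELECT customer_id,
--          GROUP_CONCAT(id ORDER BY id SEPARATOR ',') AS order_ids,
--          NOW() AS loaded_at
--   FROM orders GROUP BY customer_id;
-- Snowflake equivalent: GROUP_CONCAT -> LISTAGG, NOW() -> CURRENT_TIMESTAMP().
SELECT customer_id,
       LISTAGG(id, ',') WITHIN GROUP (ORDER BY id) AS order_ids,
       CURRENT_TIMESTAMP() AS loaded_at
FROM orders
GROUP BY customer_id;
```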
Run COUNT(*), MIN/MAX(id), and checksum queries on each table in both systems. Compare the aggregates in a spreadsheet or with a scripted job.
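A minimal parity query that runs unchanged on both systems; orders, id, and total are placeholder names. Run it on each side and diff the output.

```sql
-- Same statement on MariaDB and Snowflake; compare the results row by row.
SELECT COUNT(*)   AS row_count,
       MIN(id)    AS min_id,
       MAX(id)    AS max_id,
       SUM(total) AS total_sum
FROM orders;
```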
After data parity checks pass, point BI tools and services to Snowflake. Keep the change feed running for rollback until confidence is high.
Does the migration require taking MariaDB offline? No. Dump a snapshot, then stream binlog changes to Snowflake until cut-over.
Can stored procedures and triggers be migrated? Not directly. Rewrite the logic in Snowflake JavaScript UDFs or external services.
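As a sketch of the UDF route, this hypothetical function shows a simple lookup routine rewritten as a Snowflake JavaScript UDF; the name and logic are illustrative only.

```sql
CREATE OR REPLACE FUNCTION order_status_label(status NUMBER)
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
AS $$
    // Arguments are exposed to JavaScript in uppercase (STATUS).
    const labels = {0: "pending", 1: "shipped", 2: "delivered"};
    return labels[STATUS] !== undefined ? labels[STATUS] : "unknown";
$$;
```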
How fast can data load? Roughly 100 GB per hour with multi-threaded uploads and Snowflake’s parallel COPY.