This guide walks through the step-by-step process of exporting data from Snowflake and loading it into Microsoft SQL Server while preserving schema fidelity.
Teams often move to SQL Server when they need on-premises control, tighter integration with .NET applications, or lower compute cost. SQL Server offers native replication, SSIS, and broad BI tool support, while Snowflake excels at elastic scaling.
Create a read-only role, grant it SELECT on every target table, and allocate sufficient warehouse credits for the export. This prevents accidental writes and helps keep snapshots consistent during the export.
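A minimal sketch of the Snowflake side, assuming a database named ANALYTICS, a warehouse named EXPORT_WH, and a service user MIGRATION_SVC (all illustrative names):

    -- Read-only role for the export; all object names below are placeholders
    CREATE ROLE IF NOT EXISTS EXPORT_READER;
    GRANT USAGE ON WAREHOUSE EXPORT_WH TO ROLE EXPORT_READER;
    GRANT USAGE ON DATABASE ANALYTICS TO ROLE EXPORT_READER;
    GRANT USAGE ON ALL SCHEMAS IN DATABASE ANALYTICS TO ROLE EXPORT_READER;
    GRANT SELECT ON ALL TABLES IN DATABASE ANALYTICS TO ROLE EXPORT_READER;
    GRANT ROLE EXPORT_READER TO USER MIGRATION_SVC;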
Use COPY INTO to unload each table into compressed CSV/Parquet on an external stage (S3, Azure Blob, GCS). Partition large tables by date columns to parallelize downloads.
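For example, one table could be unloaded to a named external stage like this (the stage, database, and column names are illustrative; compressed files still need to be downloaded and decompressed before loading into SQL Server):

    -- Unload one table as gzip-compressed CSV, partitioned by month
    COPY INTO @export_stage/orders/
      FROM ANALYTICS.SALES.ORDERS
      PARTITION BY (TO_VARCHAR(ORDER_DATE, 'YYYY-MM'))
      FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP FIELD_OPTIONALLY_ENCLOSED_BY = '"')
      MAX_FILE_SIZE = 104857600;  -- ~100 MB files keep downloads and loads parallelizable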
Generate DDL from Snowflake’s information_schema, then map datatypes: VARCHAR → NVARCHAR, NUMBER → DECIMAL, TIMESTAMP_NTZ → DATETIME2. Create tables in a dedicated schema to isolate the load.
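A sketch of how the mapping can be driven from metadata, with a hypothetical SALES schema on the Snowflake side and a snowflake_stage schema on the SQL Server side:

    -- Snowflake: pull column metadata to drive the type mapping
    SELECT table_name, column_name, data_type, character_maximum_length,
           numeric_precision, numeric_scale, is_nullable
    FROM ANALYTICS.INFORMATION_SCHEMA.COLUMNS
    WHERE table_schema = 'SALES'
    ORDER BY table_name, ordinal_position;

    -- SQL Server: resulting DDL for one table, with the mapped types
    CREATE TABLE snowflake_stage.ORDERS (
        ORDER_ID    DECIMAL(38, 0) NOT NULL,  -- NUMBER(38,0)
        CUSTOMER_ID DECIMAL(38, 0) NOT NULL,  -- NUMBER(38,0)
        STATUS      NVARCHAR(50)   NULL,      -- VARCHAR(50)
        CREATED_AT  DATETIME2(7)   NULL       -- TIMESTAMP_NTZ(9); DATETIME2 caps at 7 digits
    );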
Use BULK INSERT, bcp, or OPENROWSET(BULK…) to load each CSV. Keep each file or batch to roughly 100 MB or less to limit transaction log growth, and use TABLOCK to allow minimally logged, faster inserts.
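A minimal BULK INSERT sketch for one downloaded and decompressed file (the path, table, and batch size are illustrative):

    -- Load one exported CSV chunk; TABLOCK allows a minimally logged bulk load
    BULK INSERT snowflake_stage.ORDERS
    FROM 'D:\export\orders\data_0_0_0.csv'
    WITH (
        FORMAT = 'CSV',        -- SQL Server 2017+ parser, handles quoted fields
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '0x0a',
        BATCHSIZE = 100000,    -- commit in batches to keep the transaction log small
        TABLOCK
    );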
Compare COUNT(*) and column-level aggregates or checksums between Snowflake and SQL Server, and store the results in an audit table. Any mismatch triggers a reload of the affected partition.
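A sketch of the SQL Server side of that check, assuming a hypothetical migration.audit_counts table; the Snowflake counts are captured the same way and compared afterwards:

    -- Record the target-side row count in the audit table
    INSERT INTO migration.audit_counts (table_name, side, row_count, checked_at)
    SELECT 'ORDERS', 'sqlserver', COUNT(*), SYSUTCDATETIME()
    FROM snowflake_stage.ORDERS;

    -- Optional column-level checksum to catch truncated or corrupted loads
    SELECT CHECKSUM_AGG(CHECKSUM(ORDER_ID, STATUS)) AS tbl_checksum
    FROM snowflake_stage.ORDERS;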
SQL Server Integration Services (SSIS) and Azure Data Factory can orchestrate file movement and incremental loads, with built-in retry logic and alerting, while Flyway can version-control the schema DDL.
Schedule a short read-only window in Snowflake, re-export the delta data, replay change-data-capture (CDC) events into SQL Server, then switch the application connection strings. Keep Snowflake live as a fallback for 24 hours.
Direct cross-cloud copying from Snowflake into SQL Server is not supported; you must unload to files or use a data pipeline tool.
Incremental migration works: add UPDATED_AT columns, export only the changed partitions, and use MERGE in SQL Server to upsert, as sketched below.
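A minimal upsert sketch, assuming the changed rows were first bulk-loaded into a hypothetical staging table snowflake_stage.ORDERS_DELTA:

    -- Upsert the changed rows into the target table
    MERGE snowflake_stage.ORDERS AS tgt
    USING snowflake_stage.ORDERS_DELTA AS src
        ON tgt.ORDER_ID = src.ORDER_ID
    WHEN MATCHED AND src.UPDATED_AT > tgt.UPDATED_AT THEN
        UPDATE SET tgt.STATUS = src.STATUS,
                   tgt.UPDATED_AT = src.UPDATED_AT
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ORDER_ID, CUSTOMER_ID, STATUS, CREATED_AT, UPDATED_AT)
        VALUES (src.ORDER_ID, src.CUSTOMER_ID, src.STATUS, src.CREATED_AT, src.UPDATED_AT);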
Views can be scripted, but Snowflake JavaScript stored procedures need manual rewrites to T-SQL.