Move data from Microsoft SQL Server into Google BigQuery using export-to-CSV, Cloud Storage staging, and bq load commands.
Extract tables from SQL Server, upload the files to Google Cloud Storage, create equivalent BigQuery tables, load the staged files, then validate row counts and spot-check values.
Use the bcp utility. It streams data to flat files without locking tables, making it well suited to large datasets.
bcp "SELECT * FROM dbo.Customers" queryout Customers.csv -c -t , -S sqlsrv-prod -U sa -P *****
bcp "SELECT * FROM dbo.Orders" queryout Orders.csv -c -t , -S sqlsrv-prod -U sa -P *****
Create a bucket and copy each CSV file.
gsutil mb gs://sqlserver-migration
gsutil cp *.csv gs://sqlserver-migration/
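A quick listing confirms every file landed and the sizes look plausible before you start loading:
gsutil ls -l gs://sqlserver-migration/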
Run the DDL in the BigQuery console or with the bq CLI.
CREATE TABLE ecommerce.Customers ( id INT64, name STRING, email STRING, created_at DATETIME );
CREATE TABLE ecommerce.Orders ( id INT64, customer_id INT64, order_date DATE, total_amount NUMERIC );
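These statements assume the ecommerce dataset already exists. If it does not, create it first, choosing the location used in the load commands below:
bq --location=US mk --dataset ecommerce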
Use bq load, specifying the source format, the inline schema, and the number of header rows to skip.
bq --location=US load --source_format=CSV --skip_leading_rows=1 \
ecommerce.Customers gs://sqlserver-migration/Customers.csv \
id:INT64,name:STRING,email:STRING,created_at:DATETIME
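Orders loads the same way; this mirrors the command above with the schema from the DDL (file name taken from the export step):
bq --location=US load --source_format=CSV --skip_leading_rows=1 \
ecommerce.Orders gs://sqlserver-migration/Orders.csv \
id:INT64,customer_id:INT64,order_date:DATE,total_amount:NUMERIC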
Compare row counts and sample aggregates between systems.
-- BigQuery
SELECT COUNT(*) FROM ecommerce.Customers;
-- SQL Server
SELECT COUNT(*) FROM dbo.Customers;
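Beyond raw counts, a simple aggregate over a numeric and a date column catches truncation and type-conversion problems; a sketch using the Orders table defined above:
-- BigQuery
SELECT COUNT(*), SUM(total_amount), MIN(order_date), MAX(order_date) FROM ecommerce.Orders;
-- SQL Server
SELECT COUNT(*), SUM(total_amount), MIN(order_date), MAX(order_date) FROM dbo.Orders;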
Freeze writes during final sync, script everything for repeatability, enable BigQuery partitioning for large fact tables, and document data-type mappings.
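As an illustration of the partitioning point, a date-partitioned version of the Orders table might look like this (partitioning on order_date is an assumption about your query patterns):
CREATE TABLE ecommerce.Orders ( id INT64, customer_id INT64, order_date DATE, total_amount NUMERIC ) PARTITION BY order_date;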
Yes. Use SQL Server Change Data Capture with Dataflow or Cloud Data Fusion to stream deltas into BigQuery.
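For reference, CDC has to be enabled on the source database and on each table before any pipeline can read its change tables; a minimal T-SQL sketch for the Orders table used above:
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'Orders', @role_name = NULL;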
Gzipping reduces network time and BigQuery accepts compressed files natively. Balance CPU cost against transfer savings.
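A sketch of the compressed path, reusing the Customers load from above (note that gzip-compressed CSVs cannot be split, so very large files load more slowly than uncompressed ones):
gzip Customers.csv Orders.csv
gsutil cp *.csv.gz gs://sqlserver-migration/
bq --location=US load --source_format=CSV --skip_leading_rows=1 \
ecommerce.Customers gs://sqlserver-migration/Customers.csv.gz \
id:INT64,name:STRING,email:STRING,created_at:DATETIME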
Export them as regular INT64 values. BigQuery has no IDENTITY or auto-increment columns, so if you need new surrogate keys after the migration, generate them in the load pipeline or in SQL, for example with GENERATE_UUID() on a STRING key or by computing MAX(id) + 1.
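A hypothetical post-migration insert that derives the next key from the existing maximum (adequate only for low-concurrency writes; the sample values are placeholders):
INSERT INTO ecommerce.Customers (id, name, email, created_at) SELECT IFNULL(MAX(id), 0) + 1, 'Jane Doe', 'jane@example.com', CURRENT_DATETIME() FROM ecommerce.Customers;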