Move schemas and data from SQL Server to ClickHouse using CSV, ODBC, or INSERT SELECT for high-speed analytics.
ClickHouse delivers sub-second analytics on billions of rows. Columnar storage, vectorized execution, and compression lower storage cost and speed up aggregation queries compared with SQL Server.
Export schema ➜ map data types ➜ create tables in ClickHouse ➜ transfer data ➜ verify ➜ optimize.
Run bcp or sqlcmd to dump each table to CSV. Example:
bcp "SELECT * FROM dbo.Customers" queryout customers.csv -c -t"," -S sql-prod -U sa -P P@ssw0rd
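For many tables, the export commands can be generated by a small script. A sketch, assuming a hard-coded table list and server name, and using the -T trusted-connection flag in place of the explicit credentials shown above:

```python
# Sketch: build one bcp export command per table.
# TABLES and the server name are placeholder assumptions.
TABLES = ["dbo.Customers", "dbo.Orders"]

def bcp_command(table: str, server: str = "sql-prod") -> str:
    """Build a bcp queryout command that dumps one table to CSV."""
    out_file = table.split(".")[-1].lower() + ".csv"
    return (f'bcp "SELECT * FROM {table}" queryout {out_file} '
            f'-c -t"," -S {server} -T')

for t in TABLES:
    print(bcp_command(t))
```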
Replace SQL Server types with ClickHouse types and add an ORDER BY key:
CREATE TABLE Customers
(
id UInt32,
name String,
email String,
created_at DateTime
) ENGINE = MergeTree
ORDER BY id;
CREATE TABLE Orders
(
id UInt32,
customer_id UInt32,
order_date Date,
total_amount Decimal(12,2)
) ENGINE = MergeTree
ORDER BY (customer_id, order_date);
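The type substitutions behind the DDL above can be captured in a small mapping helper. A sketch, with an illustrative (not exhaustive) type table and an assumed fallback to String for anything unmapped:

```python
# Illustrative SQL Server -> ClickHouse type map (not exhaustive).
TYPE_MAP = {
    "int": "Int32",
    "bigint": "Int64",
    "smallint": "Int16",
    "varchar": "String",
    "nvarchar": "String",
    "datetime": "DateTime",
    "datetime2": "DateTime64(3)",
    "date": "Date",
    "bit": "UInt8",
}

def map_type(mssql_type: str) -> str:
    """Translate a SQL Server column type to a ClickHouse type."""
    t = mssql_type.lower()
    if t.startswith("decimal"):
        return "Decimal" + t.removeprefix("decimal")  # keep (precision, scale)
    base = t.split("(")[0]
    return TYPE_MAP.get(base, "String")  # assumed fallback for unmapped types
```

A mapping script like this makes the translation reviewable in one place rather than scattered across hand-edited DDL.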
Use clickhouse-client --query with an INSERT ... FORMAT CSV statement:
clickhouse-client --query "INSERT INTO Customers FORMAT CSV" < customers.csv
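For very large exports, feeding the whole file to a single INSERT can be fragile; splitting it into bounded batches keeps each invocation small. A sketch of the chunking step (the batch size and the downstream command are assumptions):

```python
# Sketch: split a huge CSV into bounded batches so each
# clickhouse-client invocation inserts a manageable chunk.
from itertools import islice
from typing import Iterable, Iterator

def chunked(rows: Iterable[str], size: int) -> Iterator[list[str]]:
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

# Each batch would then be piped to:
#   clickhouse-client --query "INSERT INTO Customers FORMAT CSV"
```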
Yes. Use the odbc table function to stream directly:
INSERT INTO Customers
SELECT *
FROM odbc('Driver={ODBC Driver 18 for SQL Server};Server=sql-prod;Uid=sa;Pwd=P@ssw0rd;',
'dbo', 'Customers');
Run SELECT count() on each side and compare.
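Once the counts are collected from each side, the comparison itself is easy to script. A minimal sketch, where the dictionaries stand in for query results:

```python
# Sketch: compare per-table row counts from both systems.
# The dicts would be populated from sqlcmd / clickhouse-client output.
def verify_counts(mssql: dict[str, int], clickhouse: dict[str, int]) -> list[str]:
    """Return the tables whose row counts disagree (or are missing)."""
    return [t for t in mssql if clickhouse.get(t) != mssql[t]]
```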
Run OPTIMIZE TABLE ... FINAL to merge parts, choose an appropriate PRIMARY KEY and PARTITION BY, and add materialized views for rollups.
Schedule SQL Server Change Tracking exports, or run a CDC pipeline such as Debezium ➜ Kafka ➜ ClickHouse's Kafka table engine.
No. ClickHouse is optimized for analytics, not OLTP. Keep transactional workloads in SQL Server and move reporting tables.
Most BI tools support ClickHouse via JDBC/ODBC. Update the connection string and adjust any T-SQL-specific syntax.
Use UInt32/UInt64 and generate IDs in your app or derive them from sequences in SQL Server before loading.
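A minimal sketch of application-side ID allocation; seeding the counter from a value such as MAX(id) fetched from SQL Server before the load is an assumption:

```python
# Sketch: hand out UInt64 surrogate IDs in the application, since
# ClickHouse has no IDENTITY / auto-increment column type.
import itertools

class IdAllocator:
    """Monotonically increasing IDs from an assumed starting point
    (e.g. MAX(id) + 1 queried from SQL Server before the load)."""

    def __init__(self, start: int = 1):
        self._counter = itertools.count(start)

    def next_id(self) -> int:
        return next(self._counter)
```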