How to Migrate from Postgres to ClickHouse

Galaxy Glossary

How do I migrate data from PostgreSQL to ClickHouse?

Export data from PostgreSQL, recreate tables in ClickHouse, load files, and validate counts for a fast, column-oriented analytics workload.



Why migrate from PostgreSQL to ClickHouse?

ClickHouse delivers sub-second analytics over billions of rows thanks to columnar storage and vectorized execution. Moving large, read-heavy datasets out of Postgres removes VACUUM overhead and frees your OLTP database for transactional traffic.

Which migration methods work best?

For one-off moves, run COPY ... TO STDOUT in Postgres and pipe the output into clickhouse-client --query "INSERT ... FORMAT CSV". For ongoing syncs, use the MaterializedPostgreSQL database engine or Debezium CDC through Kafka.

Step 1 — Export data from Postgres with COPY

Run COPY per table, in parallel where possible, writing to a local CSV file or to STDOUT piped through gzip. Text and CSV formats are fastest; include HEADER for clarity.

Example

COPY Customers TO '/tmp/customers.csv' WITH (FORMAT csv, HEADER true);
COPY Orders    TO '/tmp/orders.csv'    WITH (FORMAT csv, HEADER true);
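For large tables, the export can stream straight into a compressed file so the uncompressed CSV never touches disk. A minimal sketch, assuming psql is configured to reach the source database (host and credential flags omitted):

```bash
# Stream a table to a gzip-compressed CSV without an intermediate file
psql -c "COPY Orders TO STDOUT WITH (FORMAT csv, HEADER true)" | gzip > /tmp/orders.csv.gz
```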

Step 2 — Create equivalent tables in ClickHouse

Pick a MergeTree engine keyed by frequently filtered columns. Map Postgres types to their ClickHouse equivalents: serial becomes UInt32 or UInt64, timestamp becomes DateTime, and numeric becomes Decimal.

Example

CREATE TABLE Customers (
  id UInt32,
  name String,
  email String,
  created_at DateTime
) ENGINE = MergeTree
ORDER BY id;

Step 3 — Load the exported files

Use clickhouse-client with --query "INSERT INTO … FORMAT CSV". Raise the max_insert_block_size setting to tune batch size for large loads.

Example

clickhouse-client --query "INSERT INTO Customers FORMAT CSV" < /tmp/customers.csv
clickhouse-client --query "INSERT INTO Orders FORMAT CSV" < /tmp/orders.csv
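For multi-gigabyte files, larger insert blocks mean fewer, bigger parts for ClickHouse to merge. A sketch of a tuned load; the value shown is only a starting point, not a recommendation:

```bash
# Raise the insert block size for bulk loads; tune against available memory
clickhouse-client --max_insert_block_size=1000000 \
  --query "INSERT INTO Customers FORMAT CSV" < /tmp/customers.csv
```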

Step 4 — Validate row counts

Compare counts in both systems before cutting production over: SELECT count(*) in Postgres and SELECT count() in ClickHouse. Mismatches usually point to NULL handling in text fields or delimiter and quoting issues.
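The comparison is easy to script per table. A minimal sketch, assuming both clients are installed and connection flags (host, user, database) are filled in for your environment:

```bash
# Compare row counts for one table in both systems; connection flags are placeholders
PG=$(psql -tAc "SELECT count(*) FROM Customers")
CH=$(clickhouse-client --query "SELECT count() FROM Customers")
if [ "$PG" = "$CH" ]; then
  echo "Customers: OK ($PG rows)"
else
  echo "Customers: MISMATCH pg=$PG ch=$CH"
fi
```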

How do I handle incremental sync?

Create a MaterializedPostgreSQL database to stream changes into ClickHouse automatically. Alternatively, capture WAL changes with Debezium → Kafka → ClickHouse Kafka engine.
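A minimal sketch of the MaterializedPostgreSQL setup, assuming logical replication (wal_level = logical) is enabled on the Postgres side; the host, database, user, and password shown are placeholders:

```sql
-- Experimental feature: enable it first in your session or server config
SET allow_experimental_database_materialized_postgresql = 1;

-- Mirror the public schema of a Postgres database into ClickHouse
CREATE DATABASE pg_mirror
ENGINE = MaterializedPostgreSQL('postgres-host:5432', 'mydb', 'replica_user', 'secret')
SETTINGS materialized_postgresql_schema = 'public';
```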

Best practices for a smooth cut-over

Disable foreign keys in Postgres exports, cast all dates to UTC, compress files with gzip for network transfers, and load smallest dimension tables first to support early query tests.

What performance gains can I expect?

Dashboards that scanned millions of Postgres rows typically drop to milliseconds in ClickHouse. Storage footprint shrinks 3-8× with default compression.


How to Migrate from Postgres to ClickHouse Example Usage


-- Export Orders and OrderItems from Postgres
COPY Orders      TO '/tmp/orders.csv'      WITH (FORMAT csv, HEADER true);
COPY OrderItems  TO '/tmp/order_items.csv' WITH (FORMAT csv, HEADER true);

-- Recreate in ClickHouse
CREATE TABLE Orders (
  id UInt32,
  customer_id UInt32,
  order_date DateTime,
  total_amount Decimal(10,2)
) ENGINE = MergeTree
ORDER BY (id);

CREATE TABLE OrderItems (
  id UInt32,
  order_id UInt32,
  product_id UInt32,
  quantity UInt8
) ENGINE = MergeTree
ORDER BY (order_id, id);

-- Import
clickhouse-client --query "INSERT INTO Orders FORMAT CSV"      < /tmp/orders.csv;
clickhouse-client --query "INSERT INTO OrderItems FORMAT CSV" < /tmp/order_items.csv;

How to Migrate from Postgres to ClickHouse Syntax


-- 1. Export from Postgres
COPY <table_name> TO '/path/table.csv' WITH (FORMAT csv, HEADER true, DELIMITER ',');

-- 2. Recreate table in ClickHouse
CREATE TABLE <table_name> (
    column_name1 DataType,
    column_name2 DataType,
    ...
) ENGINE = MergeTree
ORDER BY (primary_key_cols);

-- 3. Import data
clickhouse-client --query "INSERT INTO <table_name> FORMAT CSV" < /path/table.csv;

-- 4. Validate counts
SELECT count() FROM <table_name>;


Frequently Asked Questions (FAQs)

Is there a direct pg_dump import tool?

No. pg_dump produces SQL scripts or Postgres-specific archive formats that ClickHouse cannot execute. Use COPY exports or a logical-replication engine instead.

Can I migrate indexes?

Traditional B-tree indexes are unnecessary. ClickHouse’s sparse index on the ORDER BY key and data skipping granules replace most secondary indexes.
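Where a filter column is not part of the ORDER BY key, a data skipping index can stand in for a Postgres secondary index. A sketch using the Customers table from the examples above; the index type and granularity are tuning choices, not fixed rules:

```sql
-- A bloom_filter skipping index helps equality filters such as WHERE email = '...'
ALTER TABLE Customers ADD INDEX email_idx email TYPE bloom_filter GRANULARITY 4;
ALTER TABLE Customers MATERIALIZE INDEX email_idx;
```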

How do I keep both databases in sync?

Set up MaterializedPostgreSQL or stream WAL changes with Debezium into Kafka and let ClickHouse consume the topic.
