Step-by-step process to export data from ClickHouse and import it into Oracle with minimal downtime.
Oracle offers mature PL/SQL, strong ACID compliance, and a rich ecosystem that some enterprises need for transactional workloads, auditing, or vendor policy. Migrating retains historical analytics from ClickHouse while unlocking Oracle's OLTP strengths.
Typical workflow: 1) analyse ClickHouse schemas, 2) map data types to Oracle, 3) export data (CSV, Parquet, or Avro), 4) create matching tables in Oracle, 5) bulk-load files with SQL*Loader or Data Pump, 6) validate counts/checksums, 7) cut over traffic using DB links or ETL.
Run SELECT … INTO OUTFILE for each table. Compress the output (for example with gzip) for large volumes.
SELECT *
FROM Customers
INTO OUTFILE '/tmp/customers.csv'
FORMAT CSVWithNames;
Repeat for Orders, Products, and OrderItems. Store the files where the Oracle server can read them.
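Rather than typing the export statement four times, the per-table statements can be generated. A minimal sketch, assuming the table names used in this guide and `/tmp` as the export directory (both placeholders for your environment):

```python
# Sketch: generate one ClickHouse export statement per table.
# Table names and output directory are assumptions from the examples above.
TABLES = ["Customers", "Orders", "Products", "OrderItems"]

def export_statement(table: str, out_dir: str = "/tmp") -> str:
    """Build a SELECT ... INTO OUTFILE statement for clickhouse-client.

    CSVWithNames emits a header row, which the SQL*Loader control
    files later skip with OPTIONS (SKIP=1).
    """
    return (
        f"SELECT * FROM {table} "
        f"INTO OUTFILE '{out_dir}/{table.lower()}.csv' "
        f"FORMAT CSVWithNames;"
    )

for table in TABLES:
    print(export_statement(table))
```

Each printed statement can be passed to `clickhouse-client --query` or run interactively.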
Translate ClickHouse data types to their Oracle equivalents. Keep primary keys and indexes minimal until after the load.
CREATE TABLE customers (
id NUMBER PRIMARY KEY,
name VARCHAR2(100),
email VARCHAR2(255),
created_at TIMESTAMP
);
Repeat for the Orders, Products, and OrderItems tables.
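The type translation in step 2 can be captured in a lookup table. The mapping below is an illustrative sketch of common cases, not an exhaustive or authoritative conversion chart; verify sizes and precision against your actual schemas before generating DDL:

```python
# Sketch: common-case mapping of ClickHouse types to Oracle types.
# Illustrative only -- check lengths/precision against your schema.
CH_TO_ORACLE = {
    "UInt32": "NUMBER(10)",
    "UInt64": "NUMBER(20)",
    "Int32": "NUMBER(10)",
    "Int64": "NUMBER(19)",
    "Float64": "BINARY_DOUBLE",
    "String": "VARCHAR2(4000)",
    "Date": "DATE",
    "DateTime": "TIMESTAMP",
    "DateTime64(6)": "TIMESTAMP(6)",
}

def map_type(ch_type: str) -> str:
    """Return the Oracle equivalent, failing loudly on unmapped types."""
    try:
        return CH_TO_ORACLE[ch_type]
    except KeyError:
        raise ValueError(f"No Oracle mapping defined for {ch_type}")
```

Failing loudly on unmapped types is deliberate: a silent fallback would let an unexpected column type reach the load step and corrupt data.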
Generate a control file per table.
-- customers.ctl
OPTIONS (SKIP=1)
LOAD DATA
INFILE '/tmp/customers.csv'
INTO TABLE customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, name, email, created_at "TO_TIMESTAMP(:created_at, 'YYYY-MM-DD HH24:MI:SS')")
Execute:
sqlldr userid=system/password control=customers.ctl parallel=true direct=true
SQL*Loader's direct path and parallel flags speed up bulk import.
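With several tables, the control files and sqlldr invocations follow the same pattern and can be templated. A sketch, assuming the column lists and file locations from the examples above (the `app_user/pwd` credential is a placeholder; avoid loading as `system` in practice):

```python
# Sketch: generate a SQL*Loader control file and command line per table.
# Column lists, paths, and credentials are assumptions for illustration.
def control_file(table: str, columns: list, csv_dir: str = "/tmp") -> str:
    """Render a control file matching the customers.ctl example."""
    cols = ", ".join(columns)
    return (
        "OPTIONS (SKIP=1)\n"
        "LOAD DATA\n"
        f"INFILE '{csv_dir}/{table}.csv'\n"
        f"INTO TABLE {table}\n"
        "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n"
        f"({cols})"
    )

def sqlldr_command(table: str, userid: str = "app_user/pwd") -> str:
    """Render the direct-path, parallel sqlldr invocation."""
    return f"sqlldr userid={userid} control={table}.ctl direct=true parallel=true"

print(control_file("customers", ["id", "name", "email", "created_at"]))
print(sqlldr_command("customers"))
```

Write each rendered control file to disk, then run the printed commands, one per table.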
To minimise downtime, create an ODBC gateway or Oracle Heterogeneous Services DB link to ClickHouse, then copy deltas after the bulk load.
CREATE DATABASE LINK ch_link CONNECT TO "ch_user" IDENTIFIED BY "pwd"
USING 'clickhouse_dsn';
INSERT /*+ APPEND */ INTO orders o
SELECT * FROM orders@ch_link
WHERE order_date > TO_DATE('2023-10-01','YYYY-MM-DD');
Schedule incremental jobs until cut-over.
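Each incremental run should pick up where the previous one stopped. A sketch of watermark-driven delta generation, where the in-memory dict stands in for a real state table and the table/column names follow the example above:

```python
# Sketch: build the incremental-copy statement from a stored watermark.
# The dict is a stand-in for a persistent watermark/state table;
# table, column, and link names follow the examples in this guide.
watermarks = {"orders": "2023-10-01"}

def delta_insert(table: str, ts_column: str) -> str:
    """Render the APPEND-hint insert copying rows newer than the watermark."""
    last = watermarks[table]
    return (
        f"INSERT /*+ APPEND */ INTO {table}\n"
        f"SELECT * FROM {table}@ch_link\n"
        f"WHERE {ts_column} > TO_DATE('{last}','YYYY-MM-DD')"
    )

print(delta_insert("orders", "order_date"))
```

After each successful run, advance the watermark to the maximum timestamp just copied, so consecutive jobs never re-copy or skip rows.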
Freeze ClickHouse writes briefly, load final deltas, switch application connection strings, and monitor lag KPIs. Keep both systems running read-only initially for rollback safety.
Data-type mismatch: ClickHouse DateTime64 can carry up to nanosecond precision, which exceeds Oracle's default TIMESTAMP(6). Map explicitly (for example to TIMESTAMP(9)) or round values during export.
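If the target column stays at the default TIMESTAMP(6), sub-microsecond digits must be dropped before loading. A sketch, assuming DateTime64(9) values arrive as nanosecond epoch integers:

```python
# Sketch: truncate nanosecond epoch values (DateTime64(9)) to the
# microsecond precision of a default Oracle TIMESTAMP(6) column.
from datetime import datetime, timezone

def ns_to_timestamp6(epoch_ns: int) -> datetime:
    """Drop sub-microsecond digits; truncation, not rounding, so two
    systems computing this independently always agree."""
    secs, ns = divmod(epoch_ns, 1_000_000_000)
    dt = datetime.fromtimestamp(secs, tz=timezone.utc)
    return dt.replace(microsecond=ns // 1_000)
```

Truncating (rather than rounding half-up) is the safer choice for later checksum validation, since both sides of the comparison can apply the same rule deterministically.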
Loading with constraints enabled: foreign-key checks slow imports. Disable constraints, load, then re-enable and validate.
GoldenGate is not required for small datasets; DB links or ETL suffice. At enterprise scale, it simplifies real-time replication.
Compare row counts, run checksums per table, and spot-check critical reports in both systems before decommissioning ClickHouse.
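The per-table checksum can be computed identically on rows fetched from either system. A sketch, assuming rows arrive as plain tuples from whatever drivers you use (driver choice and row canonicalisation are up to your environment; numeric and date values must render identically on both sides):

```python
# Sketch: order-independent checksum for validating a table across
# two systems. Rows are assumed to be tuples with values already
# canonicalised so both systems produce identical representations.
import hashlib

def table_checksum(rows) -> str:
    """Hash each row, sum the digests modulo 2**256.

    Summing (instead of hashing a concatenation) makes the result
    independent of retrieval order, so no ORDER BY is needed.
    """
    total = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).hexdigest()
        total = (total + int(digest, 16)) % (1 << 256)
    return f"{total:064x}"
```

Identical row sets then match regardless of fetch order, and a single differing value changes the digest, which narrows validation to per-table comparisons of one hex string each.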
No. Recreate indexes in Oracle after data load; ClickHouse index formats are incompatible.
Yes. Use cron-driven SQL*Loader jobs or Oracle Scheduler with INSERT … SELECT across DB links.
Create Oracle SEQUENCE objects to continue auto-incrementing IDs once the cut-over happens.
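The sequence DDL can be derived from the highest migrated id. A sketch, where `max_id` would come from `SELECT MAX(id)` against each loaded table (the `_seq` suffix and CACHE size are illustrative conventions, not requirements):

```python
# Sketch: emit CREATE SEQUENCE DDL that resumes numbering above the
# highest migrated id. max_id would come from SELECT MAX(id); the
# naming convention and CACHE value here are assumptions.
def sequence_ddl(table: str, max_id: int) -> str:
    """Render a sequence that starts just past the migrated data."""
    return (
        f"CREATE SEQUENCE {table}_seq "
        f"START WITH {max_id + 1} INCREMENT BY 1 CACHE 100"
    )

print(sequence_ddl("orders", 1000))
```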