ClickHouse Cloud Native lets you create, load, and query column-oriented tables that scale elastically and integrate with PostgreSQL via FDW or ETL.
Cloud-native ClickHouse abstracts storage, compute, and networking so that clusters scale automatically, use S3-backed storage, and expose the regular SQL interface without manual server management.
Teams keep transactional data in PostgreSQL while offloading analytics to ClickHouse. Using the clickhouse_fdw extension or batch ETL, you can query massive datasets with millisecond latency.
Create tables with the ENGINE = MergeTree family (or ReplicatedMergeTree) and set a storage policy that points to cloud object storage. Define ORDER BY for efficient pruning.
CREATE TABLE Orders
(
    id UInt64,
    customer_id UInt64,
    order_date Date,
    total_amount Decimal(12,2)
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_mirror';
Install clickhouse_fdw, create a foreign server that points at the cloud endpoint, then map ClickHouse tables to foreign tables. Use normal SELECT statements.
CREATE EXTENSION IF NOT EXISTS clickhouse_fdw;
CREATE SERVER ch_cloud
FOREIGN DATA WRAPPER clickhouse_fdw
OPTIONS(host 'https://.aws.clickhouse.cloud', port '8443');
CREATE FOREIGN TABLE ch_orders (
id bigint,
customer_id bigint,
order_date date,
total_amount numeric
) SERVER ch_cloud OPTIONS(table 'Orders');
SELECT customer_id, SUM(total_amount)
FROM ch_orders GROUP BY customer_id;
Partition by time for log-like data, choose an ORDER BY key that matches your most frequent filters, store large blobs outside ClickHouse, and monitor system.parts to spot merge backlogs.
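The partitioning and monitoring advice above can be sketched as follows (the table name, columns, and storage policy are illustrative):

```sql
-- Time-partitioned log table: monthly partitions enable cheap drops and
-- partition pruning; the ORDER BY key should match common filters.
CREATE TABLE events
(
    event_time DateTime,
    user_id    UInt64,
    payload    String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)
ORDER BY (user_id, event_time)
SETTINGS storage_policy = 's3_mirror';

-- Watch active part counts per partition; a steadily growing number of
-- small active parts suggests merges are falling behind.
SELECT partition, count() AS part_count, sum(rows) AS total_rows
FROM system.parts
WHERE table = 'events' AND active
GROUP BY partition
ORDER BY partition;
```

Dropping old partitions with ALTER TABLE ... DROP PARTITION is far cheaper than row-level deletes, which is the main payoff of time-based partitioning for log data.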
An ORDER BY key that does not match your query filters leads to full-table scans and high costs. Forgetting to set a storage policy causes local SSD usage to grow instead of tiering to S3.
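To verify that the primary key is actually pruning rather than scanning, ClickHouse's EXPLAIN can report index usage (a sketch; the exact output format varies by server version):

```sql
-- With indexes = 1, the plan shows how many granules the primary key
-- selected. A well-chosen key selects far fewer granules than the
-- total; a full scan selects all of them.
EXPLAIN indexes = 1
SELECT count()
FROM Orders
WHERE id BETWEEN 1000 AND 2000;
```

Running this before and after changing the ORDER BY key is a quick way to confirm a filter benefits from the new sort order.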
Use FDW for low-latency joins on fresh data (under 1 GB). Batch-load data with INSERT INTO ... SELECT or tools like Airbyte when you need full historical analytics.
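One way to batch-load is from the ClickHouse side, pulling directly from PostgreSQL with the postgresql() table function (a sketch; the host, database, and credentials are placeholders, and large backfills may be better served by dedicated ETL tools):

```sql
-- Run on ClickHouse: copy a historical slice of the Postgres orders
-- table into the ClickHouse Orders table in one pass.
INSERT INTO Orders
SELECT id, customer_id, order_date, total_amount
FROM postgresql('pg-host:5432', 'appdb', 'orders', 'app_user', 'secret')
WHERE order_date < today() - 30;
```

This keeps the pipeline in plain SQL; scheduling it (for example via cron or a workflow tool) gives a simple incremental load when paired with a date predicate.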
Yes. Data is chunked into immutable parts on S3 while compute nodes pull only relevant parts, keeping queries fast and costs predictable.
All objects on S3 are encrypted with AWS KMS keys by default, satisfying SOC 2 and GDPR requirements.
Yes. The service runs the open-source engine, so you use identical DDL and DML.
Keep OLTP in PostgreSQL; push OLAP to ClickHouse. Use FDW or CDC pipelines to sync data.