Lists the minimum hardware, storage, and OS specifications needed to install and run ClickHouse without performance bottlenecks.
Confirming hardware and OS specs before installation prevents wasted time on failed builds, unexpected crashes, and poor query performance. Matching ClickHouse’s needs with your infrastructure lets you scale confidently and avoid emergency upgrades.
For local development, allocate 2 vCPUs, 4 GB RAM, and 20 GB of SSD. Production workloads need at least 8 vCPUs, 32 GB RAM, and NVMe SSDs sized for 3× your expected data volume to accommodate merges and backups.
ClickHouse parallelizes queries heavily. Two cores suffice for testing, but every serious workload benefits from 8+ modern cores: x86_64 with SSE 4.2 support (AVX2 preferred) or ARM64. Reserve one core for system processes.
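If you want to verify instruction-set support before installing, a quick check on an x86_64 host (ARM64 hosts will simply print nothing here):

```bash
# Print the relevant CPU flags from the first processor entry; empty output on ARM64.
grep -o -m1 -w -e sse4_2 -e avx2 /proc/cpuinfo | sort -u
```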
The server must hold part of each working dataset in memory. Four GB works only for demos. Allocate 1 GB per active thread, with 32 GB as a safe starting point for analytics clusters.
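If you want to turn that budget into a hard limit, a sketch using the standard `max_memory_usage` setting (the 10 GB value is illustrative, not a recommendation from this guide):

```bash
# Cap a single query at roughly 10 GB so concurrent queries stay within the host's RAM.
clickhouse-client --max_memory_usage=10000000000 \
  --query "SELECT uniqExact(number) FROM numbers(100000000)"
```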
Yes. Spinning disks throttle MergeTree engines. Use SATA SSDs at minimum; NVMe halves query latency. Use ext4 or XFS file systems and enable the `noatime` mount option to cut write amplification.
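A minimal sketch of applying and verifying the option; the device name is a placeholder, and `/var/lib/clickhouse` is ClickHouse's default data directory:

```bash
# Remount the data volume with noatime (add the option to /etc/fstab to persist it).
sudo mount -o remount,noatime /dev/nvme0n1p1 /var/lib/clickhouse

# Confirm the filesystem type and active mount options for the data directory.
findmnt -T /var/lib/clickhouse -o TARGET,SOURCE,FSTYPE,OPTIONS
```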
ClickHouse officially supports 64-bit Linux kernels ≥ 3.10 with glibc ≥ 2.17 (Ubuntu 18.04+, Debian 10+, CentOS 7.6+). macOS and Windows are supported for development only, via Docker or Homebrew.
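To confirm a host meets those floors before installing:

```bash
uname -m                   # expect x86_64 or aarch64 (64-bit)
uname -r                   # kernel version; should be >= 3.10
ldd --version | head -n 1  # glibc version; should be >= 2.17
```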
A single node runs fine on 1 Gbps LAN. For clusters, use 10 Gbps to avoid replication lag. Latency under 0.5 ms between shards keeps distributed queries fast.
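A quick way to sanity-check inter-shard latency before committing to a topology (the host name is a placeholder):

```bash
# Average round-trip time should sit well under 0.5 ms within one rack or VPC.
ping -c 10 shard-02.internal
```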
Benchmark typical queries on a staging node, then scale out horizontally. Keep data disks at <70 % capacity, separate WAL to its own SSD, and monitor merge times. Upgrade RAM before CPU when memory usage nears 75 %.
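Two checks that cover the capacity and merge-time guidance; the path assumes the default data directory:

```bash
# Disk usage for the ClickHouse data directory; keep it under ~70 %.
df -h /var/lib/clickhouse

# Merges currently running, how long they have taken, and how much data they touch.
clickhouse-client --query "
  SELECT table, elapsed, progress, total_size_bytes_compressed
  FROM system.merges"
```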
After provisioning, run `clickhouse local --query "SELECT version()"` to confirm the binary starts. Use `lsblk -d -o name,rota` to verify that disks are SSDs (ROTA=0) and `nproc` to check CPU cores.
An ecommerce team ingesting 5 M order rows daily should begin with 16 vCPUs, 64 GB RAM, and 1 TB NVMe. This setup supports real-time dashboards and six-month retention without re-sharding.
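As a sketch of how six-month retention is expressed at the table level (the table and columns below are hypothetical, not taken from the team's schema):

```bash
# Hypothetical orders table with a six-month TTL so old rows age out automatically.
clickhouse-client --query "
  CREATE TABLE orders (
      order_id   UInt64,
      created_at DateTime,
      amount     Decimal(12, 2)
  )
  ENGINE = MergeTree
  ORDER BY (created_at, order_id)
  TTL created_at + INTERVAL 6 MONTH"
```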
Never deploy ClickHouse on HDDs; they slow inserts by 5×. Also, do not underprovision RAM: merges will stall and queries will time out once the OS starts swapping.
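To catch the swapping failure mode early, two standard checks (the swappiness value is a common choice, not an official ClickHouse requirement):

```bash
# Swap in use should ideally read 0 on a ClickHouse host.
free -h

# Discourage the kernel from swapping out ClickHouse memory.
sudo sysctl -w vm.swappiness=1
```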
Only for experimentation. ARM64 builds work but limited RAM and I/O make it unsuitable for production analytics.
Yes, if you mount host SSDs directly and set `--ulimit nofile` high. Avoid shared storage drivers that degrade latency.
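A minimal sketch of such a container, assuming the official `clickhouse/clickhouse-server` image and a host SSD mounted at `/data/clickhouse`:

```bash
# Raise the open-files limit and bind-mount a host SSD as the data directory.
docker run -d --name clickhouse \
  --ulimit nofile=262144:262144 \
  -v /data/clickhouse:/var/lib/clickhouse \
  -p 8123:8123 -p 9000:9000 \
  clickhouse/clickhouse-server
```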
Add shards with identical hardware, enable replication, and use the `Distributed` engine to query across nodes seamlessly.
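A sketch of the query layer on top of an existing cluster; the cluster name, database, and local table are assumptions for illustration:

```bash
# Create a Distributed table that fans queries out to orders_local on every shard.
# 'analytics' must be defined under remote_servers in the server configuration.
clickhouse-client --query "
  CREATE TABLE orders_all AS orders_local
  ENGINE = Distributed(analytics, default, orders_local, rand())"
```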