The error appears when a query needs more RAM than the limits defined by ClickHouse settings such as max_memory_usage. ClickHouse aborts a query as soon as its memory demand exceeds max_memory_usage (per query) or max_memory_usage_for_user (all of a user's concurrently running queries combined). The default limit may be lower than what your dataset or a heavy JOIN/ORDER BY operation requires.
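To see which limits are currently in effect for your session, you can query system.settings (a quick check, assuming you can read the system tables):

SELECT name, value
FROM system.settings
WHERE name IN ('max_memory_usage', 'max_memory_usage_for_user');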
Add a SETTINGS clause to scope the change to a single statement, or run a SET statement to apply it for the rest of the session. Either way the override stays local, keeping cluster-wide defaults safe.
SELECT o.id, c.name, SUM(oi.quantity) AS items
FROM Orders o
INNER JOIN OrderItems oi ON oi.order_id = o.id
INNER JOIN Customers c ON c.id = o.customer_id
GROUP BY o.id, c.name
SETTINGS max_memory_usage = 8589934592;
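If several statements in the same session need the higher limit, a SET statement avoids repeating the clause; the 8589934592 value (8 GiB) is only an illustration and should match your hardware:

SET max_memory_usage = 8589934592;  -- raises the per-query limit for the rest of this session

Every query that follows in the session runs under the raised limit until it is changed again or the session ends.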
Edit config.xml (server-wide) or users.xml (per-user profiles) and restart ClickHouse. Increase <max_memory_usage> or <max_memory_usage_for_user> to a realistic value based on your hardware.
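As a sketch, a per-user override in users.xml might look like the following (the default profile is the stock profile name, the byte values are placeholders, and the <clickhouse> root element applies to recent releases):

<clickhouse>
    <profiles>
        <default>
            <max_memory_usage>17179869184</max_memory_usage>                      <!-- 16 GiB per query (placeholder) -->
            <max_memory_usage_for_user>25769803776</max_memory_usage_for_user>    <!-- 24 GiB per user (placeholder) -->
        </default>
    </profiles>
</clickhouse>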
Enable spill-to-disk before the hard limit is hit. Tune max_bytes_before_external_sort and max_bytes_before_external_group_by so ClickHouse offloads intermediate data to disk instead of failing.
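For example, the earlier aggregation could be told to spill once it reaches roughly half of its memory ceiling; the 4 GiB thresholds below are an illustrative rule of thumb, not values from the ClickHouse documentation:

SELECT o.id, c.name, SUM(oi.quantity) AS items
FROM Orders o
INNER JOIN OrderItems oi ON oi.order_id = o.id
INNER JOIN Customers c ON c.id = o.customer_id
GROUP BY o.id, c.name
SETTINGS max_memory_usage = 8589934592,                     -- hard ceiling: 8 GiB
         max_bytes_before_external_group_by = 4294967296,   -- spill GROUP BY state at 4 GiB
         max_bytes_before_external_sort = 4294967296;       -- spill sort data at 4 GiB (matters once ORDER BY is added)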
Project only the columns you need, filter early, use ANY JOIN strictness when a single matching row is enough, and pre-aggregate with materialized views. Smaller datasets mean smaller working sets.
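As one sketch of pre-aggregation, a materialized view could keep per-order item counts up to date so the heavy GROUP BY never has to scan the raw OrderItems rows (the view name and engine choice are assumptions, not taken from the original example):

CREATE MATERIALIZED VIEW order_item_totals
ENGINE = SummingMergeTree
ORDER BY order_id AS
SELECT order_id, quantity AS items   -- summed per order_id as parts merge
FROM OrderItems;

Because merges happen asynchronously, reads should still aggregate, e.g. SELECT order_id, sum(items) FROM order_item_totals GROUP BY order_id.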
Start by optimizing SQL. Then bump max_memory_usage for heavy analytics, keep spill settings below the hard ceiling, and monitor system.query_log to catch memory-hungry statements early.
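A simple way to spot candidates in system.query_log is to rank recently finished queries by how much memory they used (the one-day window and LIMIT are arbitrary):

SELECT
    formatReadableSize(memory_usage) AS peak_memory,
    query_duration_ms,
    query
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_date >= today() - 1
ORDER BY memory_usage DESC
LIMIT 10;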
Yes. Append ?max_memory_usage=8589934592 (8 GiB) to the connection string to apply the limit to every query executed over that connection.
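For example, over the HTTP interface the setting rides along as a URL parameter (the host and port below are placeholders; most drivers accept the same setting name in their connection options):

http://clickhouse-host:8123/?max_memory_usage=8589934592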
Query system.query_log for rows where exception_code = 241. The memory_usage column shows how much memory each statement consumed, so you can prioritize tuning the worst offenders.
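A query along these lines lists the statements that were aborted in the last day so you know where to start (the time window and LIMIT are arbitrary):

SELECT event_time, formatReadableSize(memory_usage) AS peak_memory, query
FROM system.query_log
WHERE exception_code = 241
  AND event_date >= today() - 1
ORDER BY event_time DESC
LIMIT 20;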
Not directly. Higher limits let queries run instead of failing, but if physical RAM is exhausted the OS may start swapping, which degrades performance. Monitor system memory.
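One way to watch memory headroom from inside ClickHouse is system.asynchronous_metrics; the metric names below exist in recent releases, so adjust the filter if your version differs:

SELECT metric, formatReadableSize(value) AS size
FROM system.asynchronous_metrics
WHERE metric IN ('MemoryResident', 'OSMemoryTotal', 'OSMemoryAvailable');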