The error appears when a query’s execution memory surpasses the limit Snowflake allocates to the session or warehouse.
Snowflake allocates a fixed amount of memory to each warehouse cluster. Hash-heavy joins, large intermediate results, or missing partition pruning can exhaust that budget and abort the query with this error.
Snowflake does not expose a SYSTEM$MEMORY_USAGE function; the practical signal is memory spilling. Run SELECT query_id, bytes_spilled_to_local_storage, bytes_spilled_to_remote_storage FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
in a separate tab. Non-zero spill columns flag queries that are outgrowing memory, helping you spot pressure early.
Yes. Scale the warehouse to the next size: ALTER WAREHOUSE dev_wh SET WAREHOUSE_SIZE = 'LARGE';
or raise the cluster count on a multi-cluster warehouse. Either change takes effect for new queries immediately.
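A sketch of both levers, assuming a warehouse named dev_wh (multi-cluster settings require Enterprise Edition or higher; note that scaling out helps concurrency, while each individual query still runs within one cluster's memory):

```sql
-- Scale up: each size step roughly doubles nodes and memory per cluster
ALTER WAREHOUSE dev_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Scale out: extra clusters absorb concurrent queries
ALTER WAREHOUSE dev_wh SET MIN_CLUSTER_COUNT = 1, MAX_CLUSTER_COUNT = 3;
```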
Cartesian joins, CROSS JOIN
without filters, selecting all columns from wide tables, and window functions over unbounded partitions create massive intermediates. Rewrite or add filters/partition keys.
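A minimal before/after sketch of these anti-patterns, using hypothetical orders and regions tables (names and columns are assumptions, not from the source):

```sql
-- Risky: builds an orders x regions intermediate of every row combination
-- SELECT * FROM orders CROSS JOIN regions;

-- Safer: join on a key, filter early, and project only the needed columns
SELECT o.order_id, r.region_name
FROM orders o
JOIN regions r ON r.region_id = o.region_id
WHERE o.order_date >= DATEADD(day, -30, CURRENT_DATE);
```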
Filter early, project only the columns you need, and aggregate before joining. For example, compute SELECT customer_id, SUM(total_amount)
with a GROUP BY in a CTE, then join against that aggregate; this can cut memory use by orders of magnitude.
Yes. If the data is unchanged and the query text matches, Snowflake serves results from cache using almost no memory. Encourage users to rerun identical queries instead of copy-pasting slight variations.
The example below aggregates OrderItems
first, joins the reduced datasets, and runs comfortably inside a SMALL warehouse.
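A minimal sketch of that shape — only OrderItems is named in the text, so the Orders and Customers tables and all column names here are assumptions:

```sql
-- Shrink the large fact table before any join touches it
WITH item_totals AS (
    SELECT order_id, SUM(quantity * unit_price) AS order_total
    FROM OrderItems
    GROUP BY order_id
)
-- Joins now operate on one row per order instead of one row per item
SELECT c.customer_id, SUM(it.order_total) AS lifetime_value
FROM item_totals it
JOIN Orders o ON o.order_id = it.order_id
JOIN Customers c ON c.customer_id = o.customer_id
GROUP BY c.customer_id;
```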
Yes. Each size increment roughly doubles the compute nodes and allotted memory.
Only if the query text and underlying data are unchanged. New or modified data forces re-execution and requires memory again.
Creating a TEMP table for pre-aggregated results can slash runtime memory, especially when the table is reused across multiple steps.
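A hedged sketch of the TEMP-table approach, again using hypothetical orders columns; the temporary table lives only for the session and is dropped automatically:

```sql
-- Materialize the expensive aggregate once
CREATE TEMPORARY TABLE customer_totals AS
SELECT customer_id, SUM(total_amount) AS total_spent
FROM orders
GROUP BY customer_id;

-- Later steps query the small pre-aggregated table instead of re-scanning orders
SELECT customer_id, total_spent
FROM customer_totals
WHERE total_spent > 1000;
```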