The error occurs when a query’s working data exceeds the RAM BigQuery allocates for the slot reservation, forcing the job to abort.
BigQuery allocates memory per slot. When joins, aggregations, or large result sets need more RAM than those slots supply, the job stops and raises the error.
Check the Query Plan tab. Look for stages marked “Repartition” or “Hash Join” with high shuffle or peak memory. These stages usually exhaust memory.
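Stage statistics are also queryable from INFORMATION_SCHEMA. A sketch, assuming the JOBS_BY_PROJECT view and a hypothetical job ID; exact field availability can vary by region:

```sql
-- Shuffle-heavy stages for one job (the job ID is hypothetical).
SELECT
  stage.name,
  stage.shuffle_output_bytes,
  stage.slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT,
  UNNEST(job_stages) AS stage
WHERE job_id = 'my_job_id'
ORDER BY stage.shuffle_output_bytes DESC;
```

Stages at the top of this list are the usual suspects when memory runs out.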
Use explicit column lists instead of SELECT *. Less data flows to later stages, lowering memory use.
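For example (table and column names are hypothetical):

```sql
-- Scans and shuffles every column:
-- SELECT * FROM `project.dataset.orders`;

-- Moves only the columns later stages actually need:
SELECT order_id, customer_id, order_total
FROM `project.dataset.orders`;
```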
Push WHERE clauses down, or use sub-queries, to shrink tables before expensive joins.
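A sketch with hypothetical tables: filtering orders inside a sub-query before the join means the join's hash tables hold only recent rows.

```sql
SELECT o.order_id, c.customer_name
FROM (
  SELECT order_id, customer_id
  FROM `project.dataset.orders`
  WHERE order_date >= '2024-01-01'   -- shrink this side first
) AS o
JOIN `project.dataset.customers` AS c
  USING (customer_id);
```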
Materialize intermediate results with CREATE TEMP TABLE. Each step then gets fresh memory limits.
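In a multi-statement query this might look like (names hypothetical):

```sql
-- Step 1: the heavy aggregation runs as its own job step.
CREATE TEMP TABLE daily_totals AS
SELECT customer_id, SUM(order_total) AS total
FROM `project.dataset.orders`
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;

-- Step 2: join the much smaller intermediate result.
SELECT c.customer_name, d.total
FROM daily_totals AS d
JOIN `project.dataset.customers` AS c
  USING (customer_id);
```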
Store Orders partitioned by order_date and clustered by customer_id so scans stay small.
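The corresponding DDL might be (assuming order_date is a DATE column; all names hypothetical):

```sql
CREATE TABLE `project.dataset.Orders`
PARTITION BY order_date
CLUSTER BY customer_id
AS
SELECT * FROM `project.dataset.orders_staging`;
```

Queries that filter on order_date then prune partitions instead of scanning the whole table.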
APPROX_COUNT_DISTINCT and its relatives use sketches that require far less memory than exact counts.
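For example, on a hypothetical orders table:

```sql
-- Exact, memory-hungry:
-- SELECT COUNT(DISTINCT customer_id) FROM `project.dataset.orders`;

-- Approximate, sketch-based, with a small bounded error:
SELECT APPROX_COUNT_DISTINCT(customer_id) AS approx_customers
FROM `project.dataset.orders`;
```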
When one side of a join is tiny, BigQuery's optimizer broadcasts it to every slot automatically instead of shuffling both sides; help it by filtering or pre-aggregating the small side so it stays broadcast-sized. (The legacy-SQL EACH modifier and explicit broadcast hints are not part of GoogleSQL.)
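Keeping the small side of a join genuinely small makes a broadcast plan more likely. A sketch with hypothetical tables:

```sql
SELECT o.order_id, r.region_name
FROM `project.dataset.orders` AS o
JOIN (
  -- Pre-reduce the lookup side so it stays broadcast-sized.
  SELECT region_id, region_name
  FROM `project.dataset.regions`
  WHERE active = TRUE
) AS r
USING (region_id);
```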
Table decorators (legacy SQL only) and sub-query filters directly reduce the data a query reads, and with it the memory it needs.
If the query is already optimized, consider flex slots or reservation autoscaling. More slots mean more aggregate RAM.
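Reserved capacity can be created with the reservation DDL. A sketch only; the admin project and reservation names are hypothetical, and the BigQuery Reservation API must be enabled:

```sql
CREATE RESERVATION `admin-project.region-us.heavy-queries`
OPTIONS (slot_capacity = 100);
```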
• Select only needed columns
• Filter early and aggressively
• Aggregate after reducing rows
• Prefer partitioned, clustered storage
• Break monolith queries
• Use approximate aggregations
• Monitor stages in Query Plan
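Put together, the checklist might shape a query like this (all names hypothetical):

```sql
-- Filter early, select only needed columns, materialize, then aggregate.
CREATE TEMP TABLE recent_orders AS
SELECT customer_id, order_total      -- explicit column list
FROM `project.dataset.orders`        -- partitioned by order_date
WHERE order_date >= '2024-01-01';    -- early filter prunes partitions

SELECT
  customer_id,
  SUM(order_total) AS total,
  APPROX_COUNT_DISTINCT(order_total) AS distinct_amounts  -- approximate aggregate
FROM recent_orders
GROUP BY customer_id;
```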
Does adding more slots always fix the error? No. Poorly written queries can still exceed memory. Optimize first, then scale slots.
Do temp tables cost extra? No. Temp tables use the same on-demand storage pricing and are dropped automatically when the job ends.
Can I see a stage's memory usage? Yes. In the Job Details pane, expand a stage and check "Peak memory", then compare it against your slot reservation.