Shows how to inspect memory consumption of BigQuery jobs using INFORMATION_SCHEMA views and job statistics.
High memory consumption can increase slot contention, slow queries, and raise costs. Monitoring usage helps tune SQL, set proper limits, and avoid job failures.
Use the INFORMATION_SCHEMA.JOBS* views. The total_slot_ms column reveals how many slot-milliseconds a job consumed, a close proxy for memory pressure, and the reservation_id column shows which reservation, if any, the job ran in.
Query region-us.INFORMATION_SCHEMA.JOBS_BY_PROJECT, filtering by creation_time and state = 'DONE'. Join to your job IDs or user names for context.
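As a sketch, a query like the following surfaces the heaviest recent jobs. The seven-day lookback and LIMIT are arbitrary choices, and the region qualifier assumes the US multi-region; adjust both to your project.

```sql
-- Top slot consumers among jobs that finished in the last 7 days.
-- `region-us` is an assumption; change it to match your data's location.
SELECT
  job_id,
  user_email,
  total_slot_ms,
  total_bytes_processed,
  reservation_id
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND state = 'DONE'
ORDER BY total_slot_ms DESC
LIMIT 20;
```

Note that the JOBS views are partitioned by creation_time, so the time filter also keeps the metadata scan itself cheap.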
Yes. Inspect the query execution plan: the job statistics returned by the Jobs API (and the job_stages column in the INFORMATION_SCHEMA.JOBS views) include per-stage slot_ms values that pinpoint heavy operators.
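A minimal sketch of a per-stage breakdown, assuming the job_stages REPEATED column and its slot_ms field; the job ID and region are placeholders.

```sql
-- Per-stage slot consumption for one job; 'my_job_id' is hypothetical.
SELECT
  stage.name AS stage_name,
  stage.slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT,
  UNNEST(job_stages) AS stage
WHERE job_id = 'my_job_id'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY stage.slot_ms DESC;
```

Stages with disproportionate slot_ms (often joins or aggregations over skewed keys) are the usual candidates for rewriting.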
Set maximum_bytes_billed in the query job configuration (or pass --maximum_bytes_billed to the bq CLI). Although it caps bytes scanned rather than memory, bounding how much data a query can process indirectly limits memory as well.
If workloads regularly exceed available memory, create a slot reservation and assign the affected projects to it; the reservation_id column in the JOBS views then shows which reservation served each job. This isolates resources and stabilizes performance.
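A sketch using BigQuery's reservation DDL, assuming an administration project that owns the reservation; all names and the slot count are placeholders.

```sql
-- Create a dedicated pool of slots (names and capacity are hypothetical).
CREATE RESERVATION `admin-project.region-us.heavy-workloads`
OPTIONS (slot_capacity = 100);

-- Route query jobs from one project into that reservation.
CREATE ASSIGNMENT `admin-project.region-us.heavy-workloads.analytics-assignment`
OPTIONS (
  assignee = 'projects/my-analytics-project',
  job_type = 'QUERY');
```

Reservation DDL requires a capacity commitment or autoscaling configuration in the administration project; check your edition's requirements before running it.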
No. Memory cost is bundled into slot pricing: high-memory jobs consume more slot-milliseconds, which indirectly increases cost.
BigQuery lacks a hard memory cap, but you can limit bytes scanned or use reservations to isolate workloads.
The metadata is near real-time—typically within seconds after a job finishes.