BigQuery lets you enable or bypass its query-result cache, speeding up or forcing recomputation of repeated queries.
BigQuery automatically stores the output of a successful query (excluding DDL and DML statements) for 24 hours. When an identical query runs again and the underlying data has not changed, BigQuery returns the cached result almost instantly and at no cost.
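To confirm whether a repeated query was actually answered from the cache, the google-cloud-bigquery Python client (used again below) exposes this on the finished job. The following is a minimal sketch; the project ID, dataset, and table are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
sql = "SELECT customer_id, COUNT(*) AS order_count FROM `my_dataset.Orders` GROUP BY customer_id"

# Run the same statement twice; the second run is normally served from the cache
# as long as Orders has not changed in between.
for attempt in (1, 2):
    job = client.query(sql)
    job.result()  # wait for the job to finish
    print(f"run {attempt}: cache_hit={job.cache_hit}, bytes_billed={job.total_bytes_billed}")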
Disable caching when you need fresh metrics after a table update, when you are validating performance changes, or when you are benchmarking costs. Routine dashboards and ad hoc analysis should usually keep caching on to save money.
On the command line, pass --use_cache=false. The flag sits between the bq query verb and your SQL text.
bq query --use_cache=false "SELECT customer_id, COUNT(*) AS order_count FROM Orders GROUP BY customer_id"
In the Google Cloud console, open the Query settings pane and uncheck “Use cached results” before running your SQL.
When you submit jobs through the API or a client library, set the job configuration field useQueryCache to false. For example, in Python: job_config.use_query_cache = False.
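Putting that together, here is a minimal end-to-end sketch with the google-cloud-bigquery Python client; the project ID and table name are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Bypass the result cache for this job only.
job_config = bigquery.QueryJobConfig(use_query_cache=False)

sql = """
    SELECT customer_id, COUNT(*) AS order_count
    FROM `my_dataset.Orders`
    GROUP BY customer_id
"""
job = client.query(sql, job_config=job_config)
for row in job.result():
    print(row.customer_id, row.order_count)

print("served from cache:", job.cache_hit)  # expected: False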
Keep caching enabled in production dashboards. Tag benchmarking queries with --use_cache=false to avoid skewed timings, and document cache settings in shared scripts so collaborators know what to expect.
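For benchmarking, one simple approach (a sketch, using the same placeholder project and table as above) is to run the identical statement once with the cache allowed and once with it bypassed, then compare elapsed times and cache_hit.

import time
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
sql = "SELECT customer_id, COUNT(*) AS order_count FROM `my_dataset.Orders` GROUP BY customer_id"

def timed_run(use_cache):
    # Illustrative helper: run the query and report elapsed time and cache usage.
    config = bigquery.QueryJobConfig(use_query_cache=use_cache)
    start = time.perf_counter()
    job = client.query(sql, job_config=config)
    job.result()
    print(f"use_cache={use_cache}: {time.perf_counter() - start:.2f}s, cache_hit={job.cache_hit}")

timed_run(True)   # may be a cache hit if the query ran recently
timed_run(False)  # forces a full scan, so the timing reflects real work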
A common mistake is running /* random comment */ SELECT... and assuming it will reuse the cache: textual differences, even comments, break cache hits. Also, updating a table automatically invalidates cached results for any query that references it, so global cache flushes are unnecessary.
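You can see the exact-text requirement for yourself by re-running a query with only a comment added; the second job should report cache_hit as False. Again, the project and table names are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
base_sql = "SELECT COUNT(*) AS n FROM `my_dataset.Orders`"

first = client.query(base_sql)
first.result()

# Logically the same query, but the comment changes the query text,
# so BigQuery treats it as a new query and misses the cache.
second = client.query("/* random comment */ " + base_sql)
second.result()

print("first cache_hit:", first.cache_hit)
print("second cache_hit:", second.cache_hit)  # expected: False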
Disabling the cache can increase cost: without cached results BigQuery scans storage again, so you pay for the bytes processed. Use it sparingly.
Cached results last 24 hours from the time the original job completes, unless the referenced tables change sooner.
There is no dedicated command for clearing the cache. Editing the underlying table or waiting 24 hours naturally invalidates cached data.