Use INFORMATION_SCHEMA views or __TABLES__ meta-tables to find the on-disk size of a BigQuery table and its partitions.
Run a SELECT against `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE or the legacy __TABLES__ meta-table. Both return stored bytes from metadata without scanning the table's data, so the query is effectively free.
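As a minimal sketch (assuming the US multi-region; substitute your own region qualifier):

```sql
-- Per-table storage for every table in the region; reads metadata only.
SELECT
  table_schema,
  table_name,
  total_rows,
  active_logical_bytes,
  long_term_logical_bytes
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE;
```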
Standard SQL: PROJECT_ID.`region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE (a region-level view) for sizes, plus DATASET.INFORMATION_SCHEMA.TABLE_OPTIONS for table settings. Legacy SQL: [PROJECT:DATASET.__TABLES__]. Each TABLE_STORAGE row includes total_rows, active_logical_bytes, and long_term_storage in long_term_logical_bytes; each __TABLES__ row includes row_count and size_bytes.
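The legacy meta-table can also be queried from standard SQL with backticks. A sketch, with my_project and my_dataset as placeholder names:

```sql
-- One row per table in the dataset; size_bytes is total stored bytes.
SELECT
  table_id,
  row_count,
  size_bytes,
  TIMESTAMP_MILLIS(last_modified_time) AS last_modified
FROM `my_project.my_dataset.__TABLES__`;
```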
Filter by table_name in the INFORMATION_SCHEMA views, or by table_id in __TABLES__. For a partitioned table, query the dataset-level INFORMATION_SCHEMA.PARTITIONS view and filter or group on partition_id to see per-partition footprints.
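For example, a per-partition breakdown for one table (my_project, my_dataset, and my_table are placeholders):

```sql
-- Dataset-level PARTITIONS view: one row per partition of each table.
SELECT
  partition_id,
  total_rows,
  total_logical_bytes
FROM `my_project.my_dataset.INFORMATION_SCHEMA.PARTITIONS`
WHERE table_name = 'my_table'
ORDER BY partition_id;
```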
Order by active_logical_bytes descending and limit the result set. Surfacing the largest tables first helps prioritize cleanup or clustering work.
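Putting that together, a sketch that lists the ten largest tables in a region (again assuming region-us):

```sql
-- Ten biggest tables by active (non-long-term) storage.
SELECT
  table_schema,
  table_name,
  active_logical_bytes
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
ORDER BY active_logical_bytes DESC
LIMIT 10;
```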
No. A query's scan size depends on which columns it reads and which partitions survive pruning; table size is the full stored data. Use storage stats to forecast storage cost, not query cost.
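To forecast query cost instead, a dry run reports the bytes a query would scan without executing it. A sketch using the bq CLI (assumes an installed, authenticated CLI and a placeholder table name):

```shell
# Dry run: prints estimated bytes processed, runs nothing, costs nothing.
bq query --dry_run --use_legacy_sql=false \
  'SELECT user_id FROM `my_project.my_dataset.events` WHERE event_date = "2024-01-01"'
```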
Partition by date or integer range, cluster by commonly filtered columns, and set the partition_expiration_days option. Partitions left unmodified for 90 days move to cheaper long-term storage automatically; delete the ones you no longer need.
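A sketch of such a table definition (hypothetical table and column names):

```sql
-- Daily partitions that auto-expire after 90 days.
CREATE TABLE `my_project.my_dataset.events`
(
  event_ts TIMESTAMP,
  user_id  STRING
)
PARTITION BY DATE(event_ts)
CLUSTER BY user_id
OPTIONS (partition_expiration_days = 90);
```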
Effectively, yes. Metadata queries don't scan table data; note that on-demand pricing bills INFORMATION_SCHEMA queries at a 10 MB minimum.
Storage stats are eventually consistent and usually update within a few minutes of data changes.
Yes. Click the table in the console's Explorer panel; its size appears on the Details tab. The SQL methods are better for automation.