Return the names and metadata for all tables within a BigQuery dataset.
Run a simple SELECT against the INFORMATION_SCHEMA.TABLES view or use the bq ls CLI command. Both return table names instantly without scanning data.
Use SELECT table_name FROM `project_id.dataset_id.INFORMATION_SCHEMA.TABLES` WHERE table_type = 'BASE TABLE';. Replace project_id and dataset_id with your own values.
Yes. Omit the table_type filter or adjust it to 'VIEW' or 'MATERIALIZED VIEW' to retrieve those object types.
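For instance, a minimal query to list only the views in a dataset might look like this (project_id and dataset_id are placeholders):

```sql
-- List view names in a dataset; swap in your own project and dataset IDs.
SELECT table_name
FROM `project_id.dataset_id.INFORMATION_SCHEMA.TABLES`
WHERE table_type = 'VIEW';
```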
Add row counts and table sizes for richer metadata without full table scans: the region-level INFORMATION_SCHEMA.TABLE_STORAGE view exposes total_rows and total_logical_bytes, and the dataset-level `__TABLES__` metadata table exposes row_count and size_bytes. Join either against your table list on the table name.
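A sketch of the TABLE_STORAGE approach, assuming the dataset lives in the US multi-region (`region-us`) and using the same `myproj.ecommerce` placeholders as the example further below:

```sql
-- Join table names with region-level storage metadata.
-- region-us is an assumption; substitute your dataset's region.
SELECT t.table_name,
       s.total_rows,
       ROUND(s.total_logical_bytes / 1024 / 1024, 2) AS size_mb
FROM `myproj.ecommerce.INFORMATION_SCHEMA.TABLES` t
JOIN `myproj.region-us.INFORMATION_SCHEMA.TABLE_STORAGE` s
  ON  s.table_schema = 'ecommerce'
  AND s.table_name   = t.table_name
WHERE t.table_type = 'BASE TABLE'
ORDER BY size_mb DESC;
```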
SQL is portable and works inside workflows, while bq ls is convenient in terminals and scripts. INFORMATION_SCHEMA also lets you filter, sort, or join for advanced reporting.
Always qualify tables with project and dataset (project.dataset.table). This avoids ambiguity and cross-region errors, especially in multi-project environments.
The query below lists all base tables in the ecommerce dataset, then joins the dataset's `__TABLES__` storage metadata to display row counts and sizes.
WITH tables AS (
  SELECT table_name
  FROM `myproj.ecommerce.INFORMATION_SCHEMA.TABLES`
  WHERE table_type = 'BASE TABLE'
)
SELECT t.table_name,
       s.row_count,
       ROUND(s.size_bytes / 1024 / 1024, 2) AS size_mb
FROM tables t
JOIN `myproj.ecommerce.__TABLES__` s
  ON t.table_name = s.table_id
ORDER BY size_mb DESC;
bq ls --project_id=myproj --format=prettyjson ecommerce
The --format flag outputs machine-readable JSON for automation.
No table data is scanned. bq ls is free, and INFORMATION_SCHEMA queries read only metadata, though under on-demand pricing BigQuery bills them with a 10 MB minimum per query.
Yes. Query the region-level INFORMATION_SCHEMA (e.g., `region-us.INFORMATION_SCHEMA.SCHEMATA`) to list datasets first, then loop over them or join to gather their tables.
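Alternatively, the region-level INFORMATION_SCHEMA.TABLES view covers every dataset in a region in one query. A sketch, assuming the `myproj` project and the US multi-region:

```sql
-- List every base table across all datasets in one region.
-- region-us is an assumption; substitute your region.
SELECT table_schema AS dataset_id, table_name
FROM `myproj.region-us.INFORMATION_SCHEMA.TABLES`
WHERE table_type = 'BASE TABLE'
ORDER BY dataset_id, table_name;
```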
Filter on the creation_time column, e.g. `WHERE creation_time > TIMESTAMP '2024-01-01'`, to list newer tables only.
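Put together with the earlier placeholders, a creation-date filter might look like:

```sql
-- List base tables created since 2024-01-01, newest first.
-- myproj.ecommerce is a placeholder project and dataset.
SELECT table_name, creation_time
FROM `myproj.ecommerce.INFORMATION_SCHEMA.TABLES`
WHERE table_type = 'BASE TABLE'
  AND creation_time > TIMESTAMP '2024-01-01'
ORDER BY creation_time DESC;
```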