Retrieve the storage footprint of any Snowflake table using metadata views, SHOW commands, or SYSTEM$ESTIMATE_TABLE_SIZE().
Query INFORMATION_SCHEMA.TABLE_STORAGE_METRICS. It refreshes every few minutes and reports each table's compressed size as ACTIVE_BYTES, alongside Time Travel and Fail-safe bytes; for row counts, pair it with INFORMATION_SCHEMA.TABLES.
Use DBNAME.INFORMATION_SCHEMA.TABLE_STORAGE_METRICS. Filter by table_schema and table_name for one table, or omit the filters to list all tables.
SELECT active_bytes/1024/1024 AS size_mb
FROM SHOPDB.INFORMATION_SCHEMA.TABLE_STORAGE_METRICS
WHERE table_schema = 'PUBLIC' AND table_name = 'PRODUCTS';
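TABLE_STORAGE_METRICS does not expose a row count; if you need one alongside the size, a minimal companion query against INFORMATION_SCHEMA.TABLES (which provides BYTES and ROW_COUNT) looks like this:
SELECT row_count, bytes/1024/1024 AS size_mb
FROM SHOPDB.INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'PUBLIC' AND table_name = 'PRODUCTS';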
Call SYSTEM$ESTIMATE_TABLE_SIZE('SHOPDB.PUBLIC.PRODUCTS'). It returns JSON with compressed and uncompressed byte estimates, which is helpful before loading more data.
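Since the raw result is a JSON string, one sketch is to feed it through PARSE_JSON for downstream processing; inspect the actual output first, as the field names inside the JSON are not documented here:
-- Parse the JSON estimate into a VARIANT for further querying
SELECT PARSE_JSON(SYSTEM$ESTIMATE_TABLE_SIZE('SHOPDB.PUBLIC.PRODUCTS')) AS estimate_json;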
Yes. SHOW TABLES IN SCHEMA SHOPDB.PUBLIC; returns bytes and rows columns. Wrap the output with RESULT_SCAN(LAST_QUERY_ID()) for filtering or calculations.
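A minimal sketch of that pattern (note that SHOW output columns are lowercase, so they must be double-quoted inside the RESULT_SCAN query):
SHOW TABLES IN SCHEMA SHOPDB.PUBLIC;
SELECT "name", "rows", "bytes"/1024/1024 AS size_mb
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
ORDER BY "bytes" DESC;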
Order TABLE_STORAGE_METRICS by active_bytes. Example:
SELECT table_name, active_bytes/1024/1024 AS size_mb
FROM SHOPDB.INFORMATION_SCHEMA.TABLE_STORAGE_METRICS
ORDER BY active_bytes DESC
LIMIT 10;
TABLE_STORAGE_METRICS updates roughly every 5 minutes. For real-time checks, use SYSTEM$ESTIMATE_TABLE_SIZE().
Automate nightly jobs that write table sizes to a monitoring table, alert on sudden spikes, and archive cold data to lower-cost storage (for example, transient tables or external stages).
1. Trusting SHOW TABLES alone; it may display stale bytes during heavy loads. Cross-check with TABLE_STORAGE_METRICS.
2. Forgetting to convert bytes to MB/GB; raw bytes are hard to interpret. Always divide by 1024^2 (MB) or 1024^3 (GB).
Use TABLE_STORAGE_METRICS for scheduled reports, SYSTEM$ESTIMATE_TABLE_SIZE for ad-hoc estimates, and SHOW TABLES for a simple overview.
No. SYSTEM$ESTIMATE_TABLE_SIZE reads metadata only and has no impact on concurrent DML.
Approximately every 5 minutes, but heavy workloads can delay updates by a few minutes.
Yes. Schedule a daily INSERT into a custom monitoring table selecting from TABLE_STORAGE_METRICS, then chart the results.
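A minimal sketch using a Snowflake scheduled task; the names TABLE_SIZE_HISTORY, RECORD_TABLE_SIZES, and MONITOR_WH are illustrative placeholders, not existing objects:
-- Hypothetical history table for nightly size snapshots
CREATE TABLE IF NOT EXISTS SHOPDB.PUBLIC.TABLE_SIZE_HISTORY (
  measured_at TIMESTAMP_NTZ,
  table_schema STRING,
  table_name STRING,
  active_bytes NUMBER
);
-- Nightly task (02:00 UTC) that records the current size of every table
CREATE OR REPLACE TASK SHOPDB.PUBLIC.RECORD_TABLE_SIZES
  WAREHOUSE = MONITOR_WH
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  INSERT INTO SHOPDB.PUBLIC.TABLE_SIZE_HISTORY
  SELECT CURRENT_TIMESTAMP(), table_schema, table_name, active_bytes
  FROM SHOPDB.INFORMATION_SCHEMA.TABLE_STORAGE_METRICS;
ALTER TASK SHOPDB.PUBLIC.RECORD_TABLE_SIZES RESUME;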