Cost-optimized Snowflake storage tiering is the practice of strategically placing data across Snowflake’s permanent, transient, temporary, and external storage options—while tuning retention features like Time Travel and Fail-safe—to minimize monthly storage spend without sacrificing compliance or performance.
Optimize Snowflake storage costs without hurting performance.
Learn how to choose the right table type, retention window, and external stage so you only pay Snowflake for the data that truly needs to live there.
Snowflake makes storage simple—drop a file into a stage or create a table and you are instantly querying. Simplicity, however, can hide waste. By default Snowflake stores every table as permanent data with 1 day of Time Travel and 7 days of Fail-safe. That full-price, fully protected tier is perfect for production reference tables but expensive for scratchpads, intermediate results, or archival data you rarely touch.
Storage tiering is the deliberate act of moving those different data classes to cheaper Snowflake constructs—transient, temporary, or external—and tuning retention so you only pay for durability you need.
Snowflake bills storage monthly at a flat rate per terabyte. Even with automatic compression, costs add up.
According to Snowflake’s own benchmarks, customers who switch appropriate data sets to transient tables and minimize retention can cut storage bills 30–60%. That margin often funds additional warehouses, or your AI copilot in Galaxy.
Permanent tables: the default tier, with Time Travel (1–90 days) and 7-day Fail-safe. Best for mission-critical, auditable data.
Transient tables: no Fail-safe; optional Time Travel up to 1 day. Great for reproducible but non-critical datasets, such as daily ETL outputs.
Temporary tables: session-scoped and auto-dropped on disconnect. Zero Fail-safe and Time Travel. Ideal for scratch or staging work inside a single session.
External tables: metadata lives in Snowflake while the files live cheaply in object storage (S3, GCS, Azure). You pay Snowflake only for metadata and the compute you use to query.
Iceberg tables: bring your own Parquet/Iceberg files while enjoying Snowflake’s query engine. A future powerhouse for cold archives.
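As a rough sketch, here is what the DDL for each tier looks like (the table names, columns, and the my_s3_stage stage are illustrative, not from any real schema):

```sql
-- Permanent (the default): full Time Travel plus 7-day Fail-safe
CREATE TABLE dim_customer (id INT, name STRING);

-- Transient: no Fail-safe; Time Travel capped at 1 day
CREATE TRANSIENT TABLE etl_daily_output (id INT, loaded_at TIMESTAMP);

-- Temporary: visible only to this session, dropped on disconnect
CREATE TEMPORARY TABLE scratch_results (id INT);

-- External: files stay in object storage; Snowflake stores only metadata
CREATE EXTERNAL TABLE archive_events
  WITH LOCATION = @my_s3_stage/events/
  FILE_FORMAT = (TYPE = PARQUET);
```

The keyword is the whole difference: swapping CREATE TABLE for CREATE TRANSIENT TABLE is often a one-word change with a seven-day Fail-safe copy of savings behind it.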
Ask for every dataset: "Do I need Fail-safe? Do I need more than 1 day of Time Travel?" If not, choose TRANSIENT or TEMPORARY.
Set the shortest possible window via DATA_RETENTION_TIME_IN_DAYS. For transient tables you can even disable it (0).
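For example (the fact_orders, etl_daily_output, and staging object names are placeholders), retention can be tuned per table or defaulted at the schema level:

```sql
-- Trim Time Travel on an existing permanent table to 1 day
ALTER TABLE fact_orders SET DATA_RETENTION_TIME_IN_DAYS = 1;

-- Disable Time Travel entirely on a transient staging table
ALTER TABLE etl_daily_output SET DATA_RETENTION_TIME_IN_DAYS = 0;

-- Default every table in a schema to zero retention
ALTER SCHEMA staging SET DATA_RETENTION_TIME_IN_DAYS = 0;
```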
Schedule DELETE statements or stream-triggered tasks to purge rows older than compliance requires; Snowflake reclaims the underlying micro-partitions automatically once their Time Travel and Fail-safe windows lapse.
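A minimal sketch of such a purge, assuming a hypothetical raw_events table with an event_date column, a maint_wh warehouse, and a 13-month retention policy:

```sql
-- Nightly purge of rows older than the 13 months compliance requires
CREATE OR REPLACE TASK purge_old_events
  WAREHOUSE = maint_wh
  SCHEDULE = 'USING CRON 0 3 * * * UTC'
AS
  DELETE FROM raw_events
  WHERE event_date < DATEADD(month, -13, CURRENT_DATE);

-- Tasks are created suspended; resume to start the schedule
ALTER TASK purge_old_events RESUME;
```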
Move cold data older than X months to Parquet on S3 and replace the original table with an EXTERNAL TABLE. You retain queryability through Snowflake at a fraction of the storage price.
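A sketch of the offload, assuming a clickstream table with an event_date column and an existing external stage named cold_stage pointing at S3:

```sql
-- 1. Unload cold rows to Parquet in the external stage
COPY INTO @cold_stage/clickstream/
FROM (SELECT * FROM clickstream
      WHERE event_date < DATEADD(day, -90, CURRENT_DATE))
FILE_FORMAT = (TYPE = PARQUET)
HEADER = TRUE;

-- 2. Expose those files back to SQL for occasional queries
CREATE EXTERNAL TABLE clickstream_cold
  WITH LOCATION = @cold_stage/clickstream/
  FILE_FORMAT = (TYPE = PARQUET);

-- 3. Drop the offloaded rows from the expensive internal table
DELETE FROM clickstream
WHERE event_date < DATEADD(day, -90, CURRENT_DATE);
```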
Snowflake’s automatic compression is excellent, but nested JSON and CSV compress poorly. Converting raw logs to VARIANT columns or Parquet before loading can halve your footprint.
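For instance (the stage and table names are illustrative), landing JSON logs in a VARIANT column lets Snowflake columnarize and compress the repeated keys:

```sql
-- One VARIANT column per raw JSON document
CREATE TRANSIENT TABLE raw_logs (payload VARIANT);

COPY INTO raw_logs
FROM @log_stage/app_logs/
FILE_FORMAT = (TYPE = JSON);
```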
Query ACCOUNT_USAGE.STORAGE_USAGE to validate savings.

Worked example: you keep two years of clickstream data in a permanent table with 30-day Time Travel, yet only the last 90 days are queried regularly. The goal: shrink cost by reallocating the least-used 21 months to cheaper tiers.
Export the cold rows to Parquet on S3 and create an EXTERNAL TABLE on that location. Then replace the original table name with a VIEW that does a UNION ALL of the hot internal table and the cold external table, and delete the exported rows internally. The hot internal table shrinks from 50 TB to 6 TB; annual savings are roughly $14K at current list price.
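A sketch of the stitching view, assuming clickstream_cold was defined with the same (hypothetical) columns as the internal table:

```sql
-- Queries keep working against one name, hot or cold
CREATE OR REPLACE VIEW clickstream_all AS
SELECT user_id, event_type, event_date
FROM clickstream              -- hot: last 90 days, internal storage
UNION ALL
SELECT user_id, event_type, event_date
FROM clickstream_cold;        -- cold: 21 months of Parquet on S3
```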
While storage tiering happens in Snowflake, the heavy-lifting SQL lives in your editor. Galaxy’s AI copilot auto-suggests CREATE TRANSIENT TABLE when it detects staging patterns, warns when you forget to set DATA_RETENTION_TIME_IN_DAYS = 0, and helps generate migration scripts. Sharing those scripts in a Galaxy Collection lets your entire team standardize on cost-efficient patterns without pasting SQL into Slack.
Mistake: leaving every table on default retention. Why wrong: You pay for every micro-partition snapshot. Fix: Explicitly set DATA_RETENTION_TIME_IN_DAYS = 1 (or 0) on non-critical tables.
Mistake: creating staging models as permanent tables. Why wrong: Fail-safe stores a second full copy for seven days. Fix: Declare staging models as TRANSIENT in dbt or DDL.
Mistake: treating transient tables as free. Why wrong: Transient still costs money if you never delete. Fix: Schedule a Snowflake TASK to drop or truncate by date.
Default to TRANSIENT unless compliance forbids it, and review ACCOUNT_USAGE.STORAGE_USAGE monthly.

Snowflake’s elastic compute often overshadows its storage bill, at least until Finance calls. Implementing a thoughtful tiering strategy across permanent, transient, temporary, and external tables can cut costs in half with no performance hit for everyday analytics. Use Galaxy’s AI-enhanced SQL editor to codify and share those best practices, and you’ll never pay for bytes you don’t need.
Q: Why does storage tiering matter? A: Storage is the only Snowflake resource that bills around the clock; unoptimized tables silently inflate costs. Tiering ensures you pay premium rates only for mission-critical data, freeing budget for compute, AI features, or other strategic investments.
Q: Does Snowflake offer built-in storage tiers? A: No. Snowflake offers one storage class but different table types with varying retention guarantees. You emulate tiering by combining those types with external stages.
Q: Are transient tables compatible with compliance requirements? A: Many regulations require point-in-time recovery but not necessarily a seven-day window. Validate SLAs and set Time Travel retention or backups accordingly before switching to transient.
Q: How do I measure storage savings? A: Query SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE or use the Cost Usage Dashboard. Track bytes per table before and after migration.
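For example, these two account-usage queries show the overall trend and the per-table offenders (the byte-to-terabyte math is just unit conversion):

```sql
-- Account-wide daily storage, split into table, stage, and Fail-safe bytes
SELECT usage_date,
       storage_bytes  / POWER(1024, 4) AS table_tb,
       stage_bytes    / POWER(1024, 4) AS stage_tb,
       failsafe_bytes / POWER(1024, 4) AS failsafe_tb
FROM snowflake.account_usage.storage_usage
ORDER BY usage_date DESC
LIMIT 30;

-- Per-table view of where Time Travel and Fail-safe bytes accumulate
SELECT table_catalog, table_schema, table_name,
       active_bytes, time_travel_bytes, failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
ORDER BY time_travel_bytes + failsafe_bytes DESC
LIMIT 20;
```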
Q: Does Galaxy help with storage tiering? A: Yes. Galaxy’s AI copilot flags costly DDL patterns, suggests transient alternatives, and lets teams share standardized migration scripts through Collections.