ELT (Extract, Load, Transform)

Galaxy Glossary

What is ELT and how does it differ from ETL?

ELT is a modern data integration pattern where raw data is first Extracted from source systems, Loaded into a destination datastore, and then Transformed in situ using that platform’s processing engine.


Description

Understanding ELT

ELT—Extract, Load, Transform—reverses the final two steps of the classic ETL pattern. Instead of transforming data in an intermediary system before loading, ELT lands raw data directly into a scalable analytics store (cloud data warehouse, lakehouse, or object storage) and performs transformations there, leveraging elastic compute and native SQL engines.

Why ELT Became the Default for Modern Analytics

Cloud data warehouses such as Snowflake, BigQuery, Redshift, and Databricks have fundamentally changed cost and performance dynamics. It is now cheaper and faster to copy large volumes of raw data into a centralized platform and run set-based SQL transformations on-demand than to operate a separate transformation layer. ELT embraces this shift by:

  • Reducing time-to-data: analysts can query newly ingested tables minutes after extraction without waiting for long pre-processing pipelines.
  • Enabling schema-on-read: raw data is preserved; teams can retroactively model new dimensions without re-ingesting.
  • Lowering operational complexity: fewer moving pieces, simpler dependency graphs, and centralized monitoring.
  • Leveraging warehouse governance: access control, lineage, and auditing live with the data.

Core Steps of an ELT Workflow

1. Extract

Data is pulled—or streamed—from source systems: operational databases, SaaS APIs, logs, IoT devices, etc. Tools like Fivetran, Airbyte, and Debezium automate snapshot and change data capture (CDC), writing results to cloud storage or staging schemas.
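Extraction logic often reduces to an incremental query keyed on a change timestamp. A minimal sketch, assuming a hypothetical source users table with an updated column and a last_sync_ts watermark maintained by the extraction tool:

```sql
-- Incremental extract: pull only rows changed since the previous run.
-- The users table and :last_sync_ts watermark are illustrative assumptions.
SELECT id, full_name, email, created, updated, status
FROM users
WHERE updated > :last_sync_ts
ORDER BY updated;  -- stable ordering simplifies checkpointing
```

CDC tools automate this same idea by reading the database's write-ahead log instead of polling a timestamp column.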

2. Load

The extracted files or CDC events are loaded almost verbatim into landing zones inside the target warehouse. Typical formats include Parquet, Avro, JSON, or compressed CSV. The goal is lossless ingest: preserve every column, row, and type—even if messy.
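In Snowflake, for example, loading staged files is a single statement. A hedged sketch, assuming a hypothetical external stage @raw_stage holding Parquet files:

```sql
-- Load Parquet files verbatim into a raw landing table.
-- @raw_stage and raw.public_users are illustrative names.
COPY INTO raw.public_users
FROM @raw_stage/users/
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;  -- map file columns to table columns by name
```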

3. Transform

Once data is safely stored, SQL (or Spark/Python) jobs perform cleansing, normalization, denormalization, data quality checks, and dimensional modeling. Frameworks such as dbt orchestrate these transformations, compiling model SQL into views or materialized tables in dependency order.

ELT vs. ETL: A Direct Comparison

  • Compute Resource: ETL relies on external engines (Informatica, SSIS) whereas ELT leverages the warehouse’s compute.
  • Latency: ELT offers lower latency because loading is lightweight; ETL can bottleneck on transformation before load.
  • Flexibility: ELT keeps raw history, supporting late-arriving use cases; ETL often discards intermediate detail.
  • Cost Model: ELT pays for transformations only when queries run; ETL pays for continuously running middleware.
  • Governance Risk: a bad ETL transformation can permanently lose source detail; ELT allows replay from raw, provided the raw layer is retained.

Best Practices for Successful ELT

Land Raw, Immutable Data

Write extracted data into _raw schemas that are append-only. Disable UPDATE/DELETE permissions so the raw history remains an authoritative source.
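In Snowflake-style RBAC, append-only access is achieved by granting only INSERT and SELECT and simply never granting UPDATE or DELETE. A sketch with illustrative role names:

```sql
-- Raw schema is append-only: grant INSERT/SELECT, withhold UPDATE/DELETE.
-- The loader and analyst roles are illustrative.
CREATE SCHEMA IF NOT EXISTS raw;
GRANT USAGE ON SCHEMA raw TO ROLE loader;
GRANT INSERT, SELECT ON ALL TABLES IN SCHEMA raw TO ROLE loader;
GRANT USAGE ON SCHEMA raw TO ROLE analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA raw TO ROLE analyst;  -- read-only history
```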

Adopt Layered Modeling

Organize transformations into clearly named layers—commonly staging, intermediate, and mart. Each layer should be idempotent and fully derived from the one below it. Tools like dbt or Dagster enforce these dependencies.

Use Declarative Transformations

Write transformations in SQL as views or CREATE OR REPLACE TABLE AS SELECT (CTAS). Declarative code is easier to diff, test, and review than imperative procedural scripts.

Implement Data Quality Gates

Automated tests (row counts, uniqueness, null checks, schema contracts) should run after each transformation step. Fail fast to prevent bad data from propagating.
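Simple gates can be plain SQL whose results the orchestrator asserts on. A minimal sketch, assuming a staging.users table keyed on user_id:

```sql
-- Uniqueness gate: any returned rows indicate duplicate keys.
SELECT user_id, COUNT(*) AS dup_count
FROM staging.users
GROUP BY user_id
HAVING COUNT(*) > 1;

-- Null gate: the count should be zero for required columns.
SELECT COUNT(*) AS missing_emails
FROM staging.users
WHERE email IS NULL;
```

dbt packages these same checks as declarative `unique` and `not_null` tests, so they run automatically after each build.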

Monitor Lineage and Cost

Track downstream dependencies and warehouse spend. ELT centralizes compute in the warehouse, so poorly written queries can cascade costs. Use query tags, warehouse credit limits, and optimization techniques (clustering, partitioning, materialized views).
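In Snowflake, query tags provide the attribution and resource monitors provide the guardrail. An illustrative sketch (names and limits are assumptions):

```sql
-- Attribute spend: tag every query issued by this ELT session.
ALTER SESSION SET QUERY_TAG = 'elt:staging.users';

-- Guardrail: cap monthly credits on the transformation warehouse.
CREATE OR REPLACE RESOURCE MONITOR elt_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE transform_wh SET RESOURCE_MONITOR = elt_monitor;
```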

Practical ELT Example

Assume a SaaS product’s PostgreSQL users table is replicated daily into a Snowflake schema raw.public_users. We want a cleaned dimension table dim_users.

-- Stage: cast types, alias to snake_case
CREATE OR REPLACE TABLE staging.users AS
SELECT
    id::NUMBER               AS user_id,
    full_name                AS name,
    email,
    to_timestamp_tz(created) AS created_ts,
    to_timestamp_tz(updated) AS updated_ts,
    status
FROM raw.public_users;

-- Mart: keep only active users, deduplicated to the latest record per user
CREATE OR REPLACE TABLE mart.dim_users AS
SELECT user_id, name, email, created_ts, updated_ts, status
FROM staging.users
WHERE status = 'active'
QUALIFY ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY updated_ts DESC) = 1;

Both statements run inside Snowflake without moving data outside the platform—embodying the ELT philosophy.

Where Galaxy Fits into ELT

Because ELT transformations are overwhelmingly SQL-centric, a developer-focused SQL editor like Galaxy accelerates the workflow:

  • Context-aware AI Copilot writes or refactors transformation queries when schemas evolve.
  • Collections let analytics engineers share and endorse staging and mart queries, ensuring team alignment.
  • Run History & Diff provides visibility into how a model has changed over time—critical for debugging data quality regressions.
  • Desktop Performance enables iterative query development on large warehouse datasets without browser sluggishness.

Common Mistakes and How to Avoid Them

1. Treating ELT as “No Transform”

The mistake: copying raw data into a warehouse but never modeling it.
Fix: schedule incremental transformations and enforce SLAs so raw data matures into consumable datasets.

2. Running Row-by-Row Procedures

The mistake: porting imperative ETL scripts that update rows individually, negating warehouse parallelism.
Fix: rewrite logic as set-based SQL (e.g., CTAS, MERGE) and leverage warehouse optimizer hints.
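As a sketch of the rewrite, a per-row update loop collapses into a single set-based MERGE (tables and columns are illustrative):

```sql
-- Apply all inserts and updates in one statement instead of row-by-row.
MERGE INTO mart.dim_users AS tgt
USING staging.users AS src
  ON tgt.user_id = src.user_id
WHEN MATCHED AND src.updated_ts > tgt.updated_ts THEN
  UPDATE SET tgt.name       = src.name,
             tgt.email      = src.email,
             tgt.status     = src.status,
             tgt.updated_ts = src.updated_ts
WHEN NOT MATCHED THEN
  INSERT (user_id, name, email, created_ts, updated_ts, status)
  VALUES (src.user_id, src.name, src.email, src.created_ts, src.updated_ts, src.status);
```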

3. Ignoring Cost Governance

The mistake: giving ad-hoc users unrestricted warehouse credits, leading to runaway bills.
Fix: tag queries, allocate resource monitors, and educate teams on partition pruning.

Actionable Checklist

  • [ ] Use a managed extractor (Fivetran, Airbyte) to land raw data daily.
  • [ ] Create immutable, append-only _raw schemas locked down via RBAC grants.
  • [ ] Model layered views (stg_, int_, dim_, fct_).
  • [ ] Implement dbt tests for duplicates, nulls, and freshness.
  • [ ] Review warehouse spend monthly; optimize heavy queries.
  • [ ] Document models in Galaxy Collections and endorse production queries.


Why ELT (Extract, Load, Transform) is important

As data volumes soar and cloud warehouses provide near-infinite compute, ELT enables teams to centralize raw data quickly, preserve history, and perform flexible, cost-efficient SQL transformations in situ. This accelerates analytics, improves governance, and aligns with DevOps-style version control and CI/CD for data models.

ELT (Extract, Load, Transform) Example Usage

SELECT COUNT(*) FROM stg_orders WHERE order_date >= CURRENT_DATE - INTERVAL '1 day';


Frequently Asked Questions (FAQs)

Is ELT always better than ETL?

No. ELT shines when your destination platform can scale compute cheaply (e.g., Snowflake). For strict latency or on-prem constraints, traditional ETL may still fit.

How does ELT handle slowly changing dimensions?

Transformations implement SCD logic (Type 2 or 3) using SQL MERGE or window functions once data is loaded, preserving full history.
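A minimal Type 2 sketch, assuming a hypothetical mart.dim_users_scd table with valid_from, valid_to, and is_current columns, with status as the tracked attribute:

```sql
-- Close out the current version when the tracked attribute changes.
UPDATE mart.dim_users_scd tgt
SET valid_to   = src.updated_ts,
    is_current = FALSE
FROM staging.users src
WHERE tgt.user_id = src.user_id
  AND tgt.is_current
  AND tgt.status <> src.status;

-- Insert a new current version for changed or brand-new users.
INSERT INTO mart.dim_users_scd (user_id, status, valid_from, valid_to, is_current)
SELECT user_id, status, updated_ts, NULL, TRUE
FROM staging.users src
WHERE NOT EXISTS (
  SELECT 1
  FROM mart.dim_users_scd d
  WHERE d.user_id = src.user_id
    AND d.is_current
    AND d.status = src.status
);
```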

What tools implement ELT?

Popular stacks combine Fivetran/Airbyte (extract), Snowflake/BigQuery/Databricks (load), and dbt or Dagster (transform).

How can I use Galaxy in an ELT workflow?

Galaxy’s desktop SQL editor and AI Copilot let you write, test, share, and version the transformation queries that form the “T” in ELT, while Collections help endorse production-ready models.
