Unit testing in BigQuery isolates and validates SQL logic by comparing actual query results to expected outcomes directly inside the warehouse. Each test runs a small, deterministic SQL snippet that verifies one transformation or calculation at a time, so you catch logic errors before dashboards break.
Tests prevent regressions, document intent, and speed debugging when schemas or business rules change. They are essential for teams sharing queries or powering revenue-critical reports.
Use native BigQuery scripting (DECLARE, CREATE TEMP TABLE, ASSERT) or wrappers like Dataform, dbt, and bqunit. Native scripts give full control; frameworks add orchestration and reporting.
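For instance, a single-statement native test can be as small as one ASSERT (the dataset and column names here are hypothetical):

```sql
-- Minimal native check: the whole script aborts with the AS message
-- if any order carries a negative total_amount.
ASSERT (
  SELECT COUNT(*)
  FROM my_dataset.Orders
  WHERE total_amount < 0
) = 0 AS 'Found orders with a negative total_amount';
```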
1) Create a TEMP TABLE expected with hard-coded rows. 2) Build a TEMP TABLE actual using your production SQL. 3) Compare with EXCEPT DISTINCT or ASSERT. 4) Return zero rows, or raise an error when mismatches exist.
The example below ensures each Orders.total_amount equals the sum of its OrderItems. A non-zero mismatch count triggers an ASSERT failure.
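A sketch of that test, assuming a dataset named my_dataset and OrderItems columns item_price and quantity (both names are assumptions, not from the original text):

```sql
-- DECLARE must come before other statements in a BigQuery script.
DECLARE mismatches INT64;

-- 1) Expected results, hard-coded fixture rows.
CREATE TEMP TABLE expected AS
SELECT 1 AS order_id, 30.00 AS total_amount
UNION ALL
SELECT 2, 45.50;

-- 2) Actual results, produced by the production SQL under test.
CREATE TEMP TABLE actual AS
SELECT o.order_id, SUM(i.item_price * i.quantity) AS total_amount
FROM my_dataset.Orders AS o
JOIN my_dataset.OrderItems AS i USING (order_id)
WHERE o.order_id IN (1, 2)  -- keep the fixture tiny
GROUP BY o.order_id;

-- 3) Symmetric difference: rows present in one table but not the other.
SET mismatches = (
  SELECT COUNT(*)
  FROM (
    SELECT * FROM actual EXCEPT DISTINCT SELECT * FROM expected
    UNION ALL
    SELECT * FROM expected EXCEPT DISTINCT SELECT * FROM actual
  )
);

-- 4) A non-zero mismatch count aborts the script.
ASSERT mismatches = 0 AS 'actual order totals do not match the expected fixture rows';
```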
Store each test in a .sql file, then run it with bq query --use_legacy_sql=false in CI (Cloud Build, GitHub Actions). Fail the pipeline when any script exits non-zero.
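A minimal CI step might look like the following GitHub Actions fragment (a sketch; the tests/ directory layout is an assumption):

```yaml
# Hypothetical GitHub Actions step: bq exits non-zero on the first
# failed ASSERT, which fails the job (the default shell runs bash -e).
- name: Run BigQuery unit tests
  run: |
    for f in tests/*.sql; do
      bq query --use_legacy_sql=false < "$f"
    done
```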
Keep fixtures tiny, cover edge cases, name tests after rules, clean up temp objects, and run tests locally before committing.
Yes. Build the actual temp table with SELECT * FROM my_view. The view's query runs during the test without affecting production.
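A minimal sketch, assuming a view named my_dataset.my_view keyed by order_id (both names hypothetical):

```sql
-- Capture the view's output as the actual table, then compare it
-- against hard-coded expected rows exactly as in a table test.
CREATE TEMP TABLE actual AS
SELECT *
FROM my_dataset.my_view
WHERE order_id IN (1, 2);  -- restrict to the fixture rows
```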
Filter fixture queries to a small date range and set _PARTITIONTIME explicitly when inserting expected rows.
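A sketch under those assumptions, using a hypothetical ingestion-time partitioned fixture table my_dataset.orders_fixture:

```sql
-- Pin the partition when loading fixture rows into an
-- ingestion-time partitioned table.
INSERT my_dataset.orders_fixture (_PARTITIONTIME, order_id, total_amount)
VALUES (TIMESTAMP('2024-01-01'), 1, 30.00);

-- Filter the actual query to that same single partition,
-- so the test scans one small slice of data.
CREATE TEMP TABLE actual AS
SELECT order_id, total_amount
FROM my_dataset.orders_fixture
WHERE _PARTITIONTIME = TIMESTAMP('2024-01-01');
```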
The script aborts, BigQuery returns a non-zero exit code, and CI marks the job as failed. The error message contains the description you supplied.
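For example, the optional AS clause on ASSERT is what supplies that message (the description text here is illustrative):

```sql
-- A deliberately failing assertion; CI logs surface the AS description.
ASSERT FALSE AS 'Orders.total_amount mismatch detected';
```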