Event triggers in Snowflake are a Streams + Tasks pattern that runs SQL automatically when new rows land or existing rows change. A Stream tracks change data capture (CDC) for a table, and a Task executes a statement whenever the Stream reports pending rows.
Event triggers remove manual polling, drive near-real-time pipelines, and consume warehouse credits only when work exists, cutting both cost and complexity.
Create a Stream on Orders and a Task that checks system$stream_has_data(). When new orders arrive, the Task inserts summary rows or calls a stored procedure.
CREATE OR REPLACE STREAM orders_stream ON TABLE Orders;  -- capture row-level changes (CDC) on Orders

CREATE OR REPLACE TASK process_new_orders
  WAREHOUSE = analytics_wh
  SCHEDULE = '1 MINUTE'                            -- how often the WHEN condition is evaluated
  WHEN system$stream_has_data('orders_stream')     -- skip the run entirely when the stream is empty
AS
  CALL sp_process_orders();
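To see what the Task will pick up, you can query the Stream directly; selecting from a stream returns the pending rows plus CDC metadata columns (METADATA$ACTION, METADATA$ISUPDATE, METADATA$ROW_ID), and a plain SELECT does not advance its offset:

SELECT system$stream_has_data('orders_stream');  -- TRUE when rows are pending
SELECT * FROM orders_stream;                     -- pending rows plus the METADATA$ columns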
Use ALTER TASK task_name RESUME; to start the task and ALTER TASK task_name SUSPEND; to stop it. New tasks are created in a suspended state, so they must be resumed before they run. Suspended tasks keep their definition but won't consume credits.
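For the task from the example above:

ALTER TASK process_new_orders RESUME;   -- start evaluating the schedule
ALTER TASK process_new_orders SUSPEND;  -- pause it; definition and history are kept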
Query SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY for each run's state, timing, and error message, or run SHOW TASKS; to check a task's current state and definition. Combine with QUERY_HISTORY to view the SQL that executed.
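A typical check, using the task from the earlier example (the ACCOUNT_USAGE view can lag by an hour or more; the INFORMATION_SCHEMA table function is near-real-time):

SELECT name, state, scheduled_time, completed_time, error_message
FROM SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY
WHERE name = 'PROCESS_NEW_ORDERS'
ORDER BY scheduled_time DESC
LIMIT 20;

SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => 'PROCESS_NEW_ORDERS'));  -- recent runs, no latency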
Follow least-privilege principles: grant the Task's owner role only what it needs, typically EXECUTE TASK on the account, USAGE on the warehouse, and DML on the target tables.
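A sketch, assuming a dedicated owner role etl_task_role and a target table order_summary (both hypothetical names):

GRANT EXECUTE TASK ON ACCOUNT TO ROLE etl_task_role;                -- required to run any task
GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE etl_task_role;
GRANT SELECT ON TABLE Orders TO ROLE etl_task_role;                 -- read the source and its stream
GRANT INSERT, UPDATE ON TABLE order_summary TO ROLE etl_task_role;  -- write the hypothetical target
-- plus USAGE on the containing database and schema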
Design downstream SQL so reruns don't duplicate data, e.g., use MERGE instead of INSERT, as sketched below.
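A minimal sketch, assuming a hypothetical target table order_summary(order_id, amount) keyed by order_id:

MERGE INTO order_summary t
USING (
  SELECT * FROM orders_stream
  WHERE metadata$action = 'INSERT'   -- ignore CDC delete records in this sketch
) s
ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);

Because the Stream is consumed by this DML, a successful commit advances its offset; if the same rows are ever reprocessed, the MERGE updates rather than duplicates.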
Set SUSPEND_TASK_AFTER_NUM_FAILURES on the task so it suspends itself after repeated failures, and handle exceptions inside procedures (an EXCEPTION block in Snowflake Scripting, or try/catch in a JavaScript procedure) to prevent endless failure loops. (ON_ERROR is a COPY INTO option, not a task parameter.)
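For example, with a hedged sketch of the procedure body:

ALTER TASK process_new_orders SET SUSPEND_TASK_AFTER_NUM_FAILURES = 3;  -- auto-suspend after 3 consecutive failures

CREATE OR REPLACE PROCEDURE sp_process_orders()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  -- ... MERGE / transform steps go here ...
  RETURN 'ok';
EXCEPTION
  WHEN OTHER THEN
    RETURN 'failed: ' || SQLERRM;  -- or RAISE; to surface the failure in TASK_HISTORY
END;
$$;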
Yes. Place CALL my_proc(); in the task body to run multi-step logic, or embed a Snowflake Scripting block directly, as sketched below.
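A sketch of the inline-block variant, assuming the Orders columns and a staging table orders_staging (hypothetical names):

CREATE OR REPLACE TASK multi_step_orders
  WAREHOUSE = analytics_wh
  SCHEDULE = '1 MINUTE'
  WHEN system$stream_has_data('orders_stream')
AS
BEGIN
  INSERT INTO orders_staging SELECT order_id, amount FROM orders_stream;  -- consumes the stream
  DELETE FROM orders_staging WHERE order_id IS NULL;                      -- hypothetical cleanup step
END;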
Run ALTER TASK task_name SUSPEND; the definition stays intact, and history remains queryable.
Failed runs can repeat: automatic retries are opt-in (see the TASK_AUTO_RETRY_ATTEMPTS parameter), and because a failed run does not advance the Stream's offset, the next scheduled run reprocesses the same rows. Design your SQL to be idempotent (e.g., use MERGE) to avoid duplicates.