"Event triggers" for BigQuery route audit-log events through Pub/Sub or Eventarc to Cloud Functions or Cloud Run, so you can react to table or dataset changes in near real time.
BigQuery itself does not run procedural triggers, but every job (INSERT, UPDATE, LOAD) emits Cloud Audit Logs. By creating a log sink and wiring it to Pub/Sub or Eventarc, you get an "event trigger" that invokes Cloud Functions or Cloud Run whenever a table changes.
Triggers update dashboards instantly, restock products when Orders rows spike, or email customers after an Orders insert, all without polling.
1️⃣ Create a Pub/Sub topic.
2️⃣ Add a log sink that filters BigQuery tabledata.insertAll events and routes them to the topic.
3️⃣ Deploy a Cloud Function subscribed to the topic.
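If you use the gcloud CLI, these three steps might look roughly like the sketch below. The names bq-orders-events, bq-orders-sink, and PROJECT_ID are illustrative placeholders, and the --log-filter value is the filter shown just after the commands.

# Step 1: create the Pub/Sub topic (placeholder name).
gcloud pubsub topics create bq-orders-events

# Step 2: create a log sink that routes matching audit-log entries to the topic.
gcloud logging sinks create bq-orders-sink \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/bq-orders-events \
  --log-filter='resource.type="bigquery_resource" AND protoPayload.methodName="tabledata.insertAll" AND resource.labels.table_id="Orders"'

# Step 3: deploy a Cloud Function subscribed to the topic.
# --no-gen2 keeps the 1st-gen (msg, ctx) handler signature used later in this guide.
gcloud functions deploy ordersInserted --runtime=nodejs20 --trigger-topic=bq-orders-events --no-gen2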
resource.type="bigquery_resource" AND
protoPayload.methodName="tabledata.insertAll" AND
resource.labels.table_id="Orders"
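Before attaching this filter to the sink, you can sanity-check it from the CLI; the command below is optional and simply reads recent matching entries.

# Optional: preview recent audit-log entries that match the filter.
gcloud logging read 'resource.type="bigquery_resource" AND protoPayload.methodName="tabledata.insertAll" AND resource.labels.table_id="Orders"' --limit=5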
// Background Cloud Function subscribed to the Pub/Sub topic fed by the log sink.
exports.ordersInserted = (msg, ctx) => {
  // Pub/Sub message data arrives base64-encoded; decode it to recover the audit-log entry.
  const data = Buffer.from(msg.data, 'base64').toString();
  const event = JSON.parse(data);
  // Pull the inserted-row count out of the audit-log payload.
  const rowCount = event.protoPayload.serviceData.tableDataChange.rowInsertCount;
  console.log(`Inserted ${rowCount} rows into Orders`);
};
Run the example insert below and watch the Cloud Functions logs to confirm the message appears.
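Because the filter above matches the streaming tabledata.insertAll method (a DML INSERT statement would instead surface under jobservice.jobcompleted), a streaming insert is the quickest way to fire the trigger. The sketch below uses the Node.js client; the shop dataset and the Orders column names are assumptions, so adjust them to your schema.

// Minimal streaming-insert test: table.insert() calls the tabledata.insertAll API,
// which is what the log filter above matches.
// The shop dataset and the column names are assumed; change them to fit your table.
const {BigQuery} = require('@google-cloud/bigquery');

async function insertTestRow() {
  const bigquery = new BigQuery();
  await bigquery
    .dataset('shop')
    .table('Orders')
    .insert([{order_id: 1001, customer_id: 42, amount: 99.95}]);
  console.log('Test row inserted into Orders');
}

insertTestRow().catch(console.error);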
Missing IAM roles – The sink's writer identity needs the Pub/Sub Publisher role on the topic (roles/viewer alone is not enough for the sink writer), and the Cloud Function needs roles/pubsub.subscriber, plus BigQuery Data Viewer if it queries tables. A grant sketch follows these pitfalls.
Overly broad filters – Filtering only on resource.type="bigquery_resource" floods Pub/Sub and raises costs. Always include methodName and table/dataset labels.
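As a sketch of the fix for the first pitfall, the commands below grant the sink's writer identity the publisher role on the topic; bq-orders-sink and bq-orders-events are the placeholder names from the setup sketch, and the real writer identity is whatever the describe command prints.

# Look up the sink's writer identity (the output already includes the serviceAccount: prefix).
gcloud logging sinks describe bq-orders-sink --format='value(writerIdentity)'

# Let that identity publish to the topic; paste the full value printed above as the member.
gcloud pubsub topics add-iam-policy-binding bq-orders-events \
  --member='SINK_WRITER_IDENTITY' \
  --role='roles/pubsub.publisher'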
Can I trigger on UPDATE statements too? Yes. Change methodName in the log filter to jobservice.jobcompleted and inspect the payload for DML_UPDATE operations.
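As a minimal sketch, the adjusted filter might look like the lines below; the DML_UPDATE check then happens inside the function when it parses the job payload.

resource.type="bigquery_resource" AND
protoPayload.methodName="jobservice.jobcompleted"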
Does audit logging slow down my queries? No. Audit logs are written asynchronously after the job finishes, so query latency is unaffected.
Do I need SQL DDL to set this up? No. Event triggers rely on Cloud Audit Logs, Pub/Sub, and Cloud Functions or Eventarc, not SQL DDL.