Identify which user principals are running jobs in your BigQuery project by querying Cloud Audit Logs and the INFORMATION_SCHEMA views.
Knowing the active user list helps control costs, tighten security, and allocate training resources.
Job-level metadata lives in the <region>.INFORMATION_SCHEMA.JOBS_BY_* views and in Cloud Audit Logs. The views expose a user_email column for every job; the equivalent audit-log field is protoPayload.authenticationInfo.principalEmail.
Filter JOBS_BY_ORGANIZATION (or JOBS_BY_PROJECT to scope to a single project) on creation_time and aggregate by user_email. Keep the time window narrow: the views are partitioned on creation_time, so a tight filter limits how many partitions are scanned.
SELECT
  user_email,
  COUNT(*) AS job_count,
  SUM(total_bytes_processed) / 1e12 AS tb_scanned
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
GROUP BY user_email
ORDER BY job_count DESC;
Replace region-us with your dataset's region: the view only returns jobs that ran in the region you qualify, so the wrong qualifier silently gives you incomplete results.
Join JOBS_BY_PROJECT to JOBS_TIMELINE_BY_PROJECT for per-second slot usage, or unnest the referenced_tables array to break down activity by table, as in the sketch below.
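A minimal sketch of the referenced_tables approach; note that a job touching N tables is counted once per table:

-- Attribute each job to every table it referenced.
SELECT
  user_email,
  ref.dataset_id,
  ref.table_id,
  COUNT(*) AS job_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT AS j,
  UNNEST(j.referenced_tables) AS ref
WHERE j.creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND j.job_type = 'QUERY'
GROUP BY user_email, ref.dataset_id, ref.table_id
ORDER BY job_count DESC;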
Schedule a query that compares today's user list against a reference table and pushes a Pub/Sub message when a new email appears.
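One way to sketch the detection step, assuming a hypothetical reference table my-project.ops.known_users with one user_email per row (the Pub/Sub push itself is wired up outside the SQL, for example via the scheduled query's run notifications or a small Cloud Function):

-- Emit any principal seen in the last day that is not yet in the
-- reference table. Table names below are placeholders.
SELECT j.user_email
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT AS j
LEFT JOIN `my-project.ops.known_users` AS k
  ON j.user_email = k.user_email
WHERE j.creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND k.user_email IS NULL
GROUP BY j.user_email;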
Partition by creation_time, leverage WITH CONNECTION scheduled queries, and store snapshots in a low-cost storage tier for audit purposes.
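A sketch of the snapshot pattern, with hypothetical table names; partitioning on the snapshot day lets untouched partitions age into BigQuery's cheaper long-term storage pricing:

-- One-time setup: a date-partitioned snapshot table (names are placeholders).
CREATE TABLE IF NOT EXISTS `my-project.ops.user_activity_snapshots`
(
  snapshot_date DATE,
  user_email    STRING,
  job_count     INT64,
  tb_scanned    FLOAT64
)
PARTITION BY snapshot_date;

-- Daily scheduled query: append the last day's activity as a new partition.
INSERT INTO `my-project.ops.user_activity_snapshots`
SELECT
  CURRENT_DATE() AS snapshot_date,
  user_email,
  COUNT(*) AS job_count,
  SUM(total_bytes_processed) / 1e12 AS tb_scanned
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY user_email;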
Two common pitfalls: qualifying the wrong region (you get another region's jobs, or none at all) and scanning the views' entire history when a narrow creation_time window would do, which inflates bytes billed. Forgetting to include service accounts, which often run the bulk of automated jobs, also skews usage totals.
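To check whether service accounts are represented, one sketch is to split principals on the .gserviceaccount.com suffix that service-account emails carry:

-- Compare human vs. service-account activity over the same window.
SELECT
  IF(ENDS_WITH(user_email, '.gserviceaccount.com'),
     'service_account', 'human') AS principal_type,
  COUNT(DISTINCT user_email) AS principals,
  COUNT(*) AS job_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY principal_type;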
Job metadata in the JOBS views covers the past 180 days. Export the audit logs (for example, through a log sink into BigQuery) if you need longer history.
Queries against INFORMATION_SCHEMA views are not free, but they are cheap: on-demand pricing bills the bytes processed with a 10 MB minimum, and the metadata scanned is tiny compared with the underlying tables.
You can also slice activity by project or dataset: aggregate on project_id, on destination_table.dataset_id, or unnest referenced_tables to see dataset-level traffic.
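As a sketch of the project-level cut, using the same 30-day window as above:

-- Per-project rollup of active users and job volume.
SELECT
  project_id,
  COUNT(DISTINCT user_email) AS active_users,
  COUNT(*) AS job_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY project_id
ORDER BY job_count DESC;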