Quickly list every user, role, and session touching your Snowflake-linked PostgreSQL instance.
Teams often replicate Snowflake data into PostgreSQL for downstream services, so "who uses Snowflake" usually translates to: which Postgres users, roles, or applications are querying Snowflake-sourced tables?
Run `\du` in psql or query `pg_catalog.pg_user`. Note that `pg_user` lists only login users; `pg_catalog.pg_roles` also covers group and service roles. This shows every role—even service accounts—capable of accessing Snowflake-backed schemas.
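A minimal sketch of the catalog query, using `pg_roles` so non-login roles are included:

```sql
-- List every role, including non-login service roles, with key attributes.
SELECT rolname,
       rolcanlogin AS can_login,
       rolsuper    AS is_superuser
FROM pg_catalog.pg_roles
ORDER BY rolname;
```

`\du` in psql produces roughly the same listing with role memberships attached.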
Join `pg_stat_activity` with catalog metadata, filtering by schema or table names synced from Snowflake. The example below targets the `snowflake` schema.
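A sketch of that session query, assuming the replicated tables live in a schema literally named `snowflake` (adjust the pattern to your import schema):

```sql
-- Active sessions whose current query touches the snowflake schema.
SELECT usename,
       application_name,
       state,
       query_start,
       left(query, 80) AS query_snippet
FROM pg_stat_activity
WHERE state <> 'idle'              -- exclude idle sessions, which skew counts
  AND query ILIKE '%snowflake.%'   -- crude text match on schema-qualified names
ORDER BY query_start;
```

The text match is deliberately simple; unqualified table names or search_path tricks will slip past it.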
Aggregate `pg_stat_statements` to learn which business entities the Postgres layer pulls most. This helps size Snowflake credits and cache hot data locally.
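One way to slice the aggregation, again assuming a `snowflake` schema; `total_exec_time`/`mean_exec_time` are the PostgreSQL 13+ column names (older versions use `total_time`/`mean_time`):

```sql
-- Most frequently executed statements against Snowflake-sourced tables.
SELECT left(query, 60)  AS query_snippet,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
WHERE query ILIKE '%snowflake.%'
ORDER BY calls DESC
LIMIT 20;
```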
Enable `pg_stat_statements`, tag application names via the connection string, and create a scheduled report that emails a usage dashboard to the data team.
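Tagging clients is just a connection-string parameter; the host, database, and user names below are hypothetical:

```
# libpq URI form: application_name appears in pg_stat_activity
postgresql://report_svc@db.internal:5432/analytics?application_name=billing-report

# equivalent key/value DSN form
host=db.internal dbname=analytics user=report_svc application_name=billing-report
```

With distinct `application_name` values per service, the `pg_stat_activity` query above attributes sessions to applications, not just database users.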
Don’t forget to exclude idle sessions; they skew counts. Also ensure you’re inspecting the correct replica schema, since many shops maintain multiple Snowflake import schemas.
Yes, you need `pg_stat_statements` to aggregate query text, counts, and timing. Enable it in `shared_preload_libraries` and restart the server.
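The setup is two steps, sketched here:

```sql
-- postgresql.conf (takes effect only after a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, once per database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```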
Listing roles is instant, and querying `pg_stat_activity` is lightweight. Aggregating `pg_stat_statements` can be slower on very busy systems; run it off-peak or with a `LIMIT`.
Create a materialized view of the example queries and schedule `REFRESH MATERIALIZED VIEW` via cron or a Postgres job runner like pg_cron.
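A sketch with pg_cron, assuming the extension is installed and the usage query filters on a schema named `snowflake`; the view and job names are made up:

```sql
-- Snapshot the usage report so the dashboard query is cheap.
CREATE MATERIALIZED VIEW snowflake_usage_report AS
SELECT left(query, 60) AS query_snippet,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_statements
WHERE query ILIKE '%snowflake.%'
ORDER BY calls DESC
LIMIT 50;

-- Refresh nightly at 02:00.
SELECT cron.schedule('refresh-snowflake-usage', '0 2 * * *',
                     $$REFRESH MATERIALIZED VIEW snowflake_usage_report$$);
```

Plain cron invoking `psql -c "REFRESH MATERIALIZED VIEW snowflake_usage_report"` works just as well if you'd rather not add an extension.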