Identify, analyze, and optimize slow PostgreSQL queries with logging, statistics, and EXPLAIN.
## How do I find slow queries?

Enable the `pg_stat_statements` extension, then query it to rank statements by total or mean execution time. This avoids digging through log files.
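A minimal setup sketch: the extension must be preloaded at server start before its view is usable, so the `shared_preload_libraries` change requires a restart.

```sql
-- postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, in each database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```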
### Query example

```sql
SELECT query, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
## When should I use EXPLAIN ANALYZE?
Use `EXPLAIN ANALYZE` after spotting a slow statement to see the actual execution plan, row counts, and timing for each step.
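A sketch against a hypothetical `orders` table; adding `BUFFERS` also reports block hits and reads:

```sql
-- ANALYZE really executes the statement, so wrap writes in BEGIN ... ROLLBACK while testing.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE customer_id = 42
  AND created_at >= now() - interval '30 days';
```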
### EXPLAIN output tip

Focus on nodes with high *actual time/rows*, or on Sequential Scans where Index Scans are expected.
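An illustrative plan fragment (numbers made up) showing both symptoms at once: the planner estimated 1,200 rows, the scan actually returned 480,000, and most of the runtime goes to filtering rows sequentially.

```
Seq Scan on orders  (cost=0.00..45123.00 rows=1200 width=96) (actual time=0.031..812.405 rows=480000 loops=1)
  Filter: (customer_id = 42)
  Rows Removed by Filter: 1520000
```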
## How do I log slow queries automatically?

Set `log_min_duration_statement = 500` (ms) and reload. PostgreSQL will then log every query taking longer than 0.5 s without flooding the logs.
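One way to apply this, assuming superuser access, without editing postgresql.conf by hand:

```sql
-- Writes the setting to postgresql.auto.conf and reloads; no restart needed.
ALTER SYSTEM SET log_min_duration_statement = '500ms';
SELECT pg_reload_conf();
```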
## Which parameters speed up diagnosis?
Enable `auto_explain.log_min_duration = 500`, `auto_explain.log_analyze = on`, and `auto_explain.log_buffers = on` for automatic plan capture.
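A configuration sketch: `auto_explain` has to be loaded before its settings take effect. With `session_preload_libraries` a reload is enough (new connections pick it up); putting it in `shared_preload_libraries` instead requires a restart.

```
# postgresql.conf
session_preload_libraries = 'auto_explain'   # or add it to shared_preload_libraries
auto_explain.log_min_duration = 500          # milliseconds
auto_explain.log_analyze = on
auto_explain.log_buffers = on
```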
## What quick fixes usually help?

Create missing indexes, rewrite correlated subqueries as JOINs, apply WHERE filters as early as possible, and run `VACUUM ANALYZE` to refresh statistics.
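A sketch of each fix against hypothetical `orders` and `customers` tables (table and index names are examples, not part of the original):

```sql
-- 1. Index that matches the common filter and sort order.
CREATE INDEX CONCURRENTLY idx_orders_customer_created
    ON orders (customer_id, created_at DESC);

-- 2. Correlated subquery rewritten as a single aggregated join.
--    Before: one subquery executed per customer row:
--    SELECT c.id,
--           (SELECT count(*) FROM orders o WHERE o.customer_id = c.id)
--    FROM customers c;
SELECT c.id, count(o.customer_id) AS order_count
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id;

-- 3. Refresh planner statistics and reclaim dead tuples.
VACUUM ANALYZE orders;
```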
## Does pg_stat_statements add noticeable overhead?

The extension adds negligible overhead (≈1–2%) and is safe for production when `track_activity_query_size` is large enough.
## Can I reset the collected statistics?

Yes. Run `SELECT pg_stat_statements_reset();` to zero out the counters and start fresh.