EXPLAIN ANALYZE shows the execution plan and runtime statistics, letting you identify slow operations and tune queries.
EXPLAIN ANALYZE parses a SQL statement, executes it, and returns a line-by-line plan with actual timing, row counts, I/O, and memory metrics. Use it to uncover missing indexes, costly joins, or misestimated rows.
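For example, a minimal run against a hypothetical orders table looks like this:

```sql
-- Hypothetical table and query; substitute your own schema.
EXPLAIN ANALYZE
SELECT customer_id, sum(total)
FROM orders
WHERE created_at > now() - interval '30 days'
GROUP BY customer_id;
```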
Run it whenever a SELECT, UPDATE, or DELETE feels sluggish, after schema changes, or before deploying new features. Pair it with index reviews and statistics refreshes for best results.
Start at the bottom of the output, where the Execution Time line reports total query time, then walk upward through the nodes. Look for nodes where “Actual Total Time” dominates, and for large gaps between actual rows and the planner's estimated rows; those gaps indicate poor statistics or ineffective indexes, as in the illustrative node below.
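For illustration, a node like this one (the numbers are invented, but the format is standard plan output) shows the planner expecting 100 rows while the scan actually returned 50,000:

```
Seq Scan on orders  (cost=0.00..1693.00 rows=100 width=16) (actual time=0.021..48.310 rows=50000 loops=1)
  Filter: (status = 'pending'::text)
Execution Time: 49.102 ms
```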
Use BUFFERS to see shared and temp block usage, TIMING (on by default with ANALYZE) to control per-node timing, VERBOSE for output column lists, and FORMAT JSON for machine-readable plans consumable by tools like pganalyze.
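A sketch combining these options on the same hypothetical table:

```sql
-- FORMAT JSON emits the same plan as structured output suitable
-- for plan-analysis tools; BUFFERS adds block-level I/O counters.
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, FORMAT JSON)
SELECT * FROM orders WHERE customer_id = 42;
```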
Convert sequential scans to index scans by creating B-tree, hash, or GIN indexes on filtered columns. Rewrite OR clauses as UNION ALL, or use partial indexes for highly selective conditions, as sketched below.
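Sketches of each fix, again with hypothetical table and column names:

```sql
-- B-tree index on a commonly filtered column.
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Partial index for a highly selective predicate.
CREATE INDEX idx_orders_pending ON orders (created_at)
WHERE status = 'pending';

-- OR rewritten as UNION ALL so each branch can use its own index;
-- the extra condition keeps rows matching both predicates from duplicating.
SELECT * FROM orders WHERE customer_id = 42
UNION ALL
SELECT * FROM orders WHERE referrer_id = 42 AND customer_id <> 42;
```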
Nested-loop joins hurt on large datasets. Encourage hash or merge joins by indexing join keys, sorting data, or increasing work_mem so hash tables fit in RAM.
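A minimal sketch, assuming a hypothetical order_items table joined on order_id:

```sql
-- Index the join key so merge and hash strategies become attractive.
CREATE INDEX idx_items_order ON order_items (order_id);

-- Session-level bump so a hash table fits in memory; value is illustrative.
SET work_mem = '256MB';
```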
Outdated statistics mislead the planner. Run ANALYZE or rely on autovacuum. For skewed data, create extended statistics or lean on most_common_vals histograms, as in the example below.
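For example, assuming correlated city and zip columns on the hypothetical orders table:

```sql
-- Refresh planner statistics for one table.
ANALYZE orders;

-- Extended statistics across correlated columns; the mcv kind
-- (most-common-values lists) requires PostgreSQL 12 or later.
CREATE STATISTICS orders_city_zip (dependencies, mcv)
ON city, zip FROM orders;
ANALYZE orders;
```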
Raise work_mem for complex aggregations, shared_buffers for read-heavy workloads, and effective_cache_size to reflect the OS cache. Avoid setting random_page_cost too low; the planner may then prefer index scans even where sequential scans would be faster.
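A sketch of session-level and cluster-level tuning; the values are illustrative, not recommendations:

```sql
-- Per-session experiment before committing to a global change.
SET work_mem = '128MB';

-- Cluster-wide settings (superuser required).
ALTER SYSTEM SET shared_buffers = '8GB';          -- takes effect after restart
ALTER SYSTEM SET effective_cache_size = '24GB';   -- a reload is enough
SELECT pg_reload_conf();
```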
Note that EXPLAIN ANALYZE really executes the statement: on an INSERT, UPDATE, or DELETE it modifies data. Wrap such queries in a transaction and roll back, as shown below.
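A minimal pattern for measuring a write safely:

```sql
-- Measure the write without keeping its effects.
BEGIN;
EXPLAIN ANALYZE
UPDATE orders SET status = 'archived'
WHERE created_at < now() - interval '1 year';
ROLLBACK;
```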
The measurement overhead itself is negligible for most queries; what can get heavy is parsing very large plans in client tools, not the work on the server.