The error appears when a query's memory demand exceeds the session-level `work_mem` limit or the server runs out of available RAM. PostgreSQL raises it when a single backend needs more memory for sorts, hashes, or aggregates than `work_mem` allows; if many sessions allocate large work areas simultaneously, the operating system itself can run out of RAM, forcing PostgreSQL to abort the query.
Enable `log_min_error_statement = error` and `log_statement_stats = on`, then re-run the workload. PostgreSQL logs the offending SQL along with memory usage statistics, letting you focus tuning efforts on the right statement.
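A minimal way to enable both settings cluster-wide, assuming superuser access (they can equally be set in postgresql.conf):

```sql
-- Enable the logging settings for the whole cluster and reload.
ALTER SYSTEM SET log_min_error_statement = 'error';
ALTER SYSTEM SET log_statement_stats = on;
SELECT pg_reload_conf();
```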
`work_mem` limits memory per sort or hash operation; `maintenance_work_mem` governs CREATE INDEX, VACUUM, and CLUSTER. Increasing either eases memory pressure, but the totals must fit within the server's physical RAM.
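For example, a session-local bump for a one-off index build might look like this sketch; `big_table` and the index name are hypothetical:

```sql
-- Raise the maintenance budget for this session only, build the
-- index, then revert to the configured default.
SET maintenance_work_mem = '512MB';
CREATE INDEX idx_big_table_created_at ON big_table (created_at);
RESET maintenance_work_mem;
```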
As a rule of thumb, (max_connections × work_mem) + shared_buffers should not exceed 70–80% of physical RAM; keep in mind that a single complex query can allocate several multiples of `work_mem`. Raise `work_mem` gradually, monitor `pg_stat_database` (its `stats_reset` column tells you how long the counters have accumulated), and watch `top` or `pg_stat_activity` for spikes.
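The budget can be computed from live settings; this sketch assumes one `work_mem` allocation per connection, so treat the result as a lower bound:

```sql
-- Worst-case memory budget from current settings. A complex plan can
-- allocate work_mem several times, so the true peak may be higher.
SELECT pg_size_pretty(
         current_setting('max_connections')::bigint
         * pg_size_bytes(current_setting('work_mem'))
         + pg_size_bytes(current_setting('shared_buffers'))
       ) AS estimated_peak;
```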
Yes. Use `SET LOCAL work_mem = '256MB'` inside a transaction, or create a reporting role with `ALTER ROLE analyst SET work_mem = '256MB';` so standard OLTP sessions remain lean.
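A sketch of the transaction-scoped form; the reporting query is a hypothetical stand-in:

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- reverts automatically at COMMIT/ROLLBACK
SELECT customer_id, sum(total) AS revenue  -- hypothetical reporting query
FROM orders
GROUP BY customer_id;
COMMIT;
```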
Break large aggregations into smaller steps, add indexes to avoid full sorts, and filter early. In an e-commerce schema, pre-aggregate daily sales, then join the result to Customers for reporting instead of aggregating raw OrderItems each time.
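A sketch of that pattern, assuming hypothetical order_items and customers tables:

```sql
-- Do the heavy GROUP BY once; reports then join the small aggregate.
CREATE MATERIALIZED VIEW daily_sales AS
SELECT customer_id, order_date, sum(price * quantity) AS revenue
FROM order_items
GROUP BY customer_id, order_date;

-- Refresh on whatever schedule the reports need:
-- REFRESH MATERIALIZED VIEW daily_sales;

SELECT c.name, s.order_date, s.revenue
FROM daily_sales s
JOIN customers c ON c.id = s.customer_id;
```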
Enable `track_io_timing` and query `pg_stat_kcache` (an extension), or use `pg_top`. Alert when total resident memory nears 90% of RAM or when a single backend exceeds its expected limit.
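Spill activity is also visible in core statistics; for example:

```sql
-- Temp-file spill counters per database; stats_reset shows how long
-- the counters have been accumulating.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS spilled, stats_reset
FROM pg_stat_database
ORDER BY temp_bytes DESC;
```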
No. `shared_buffers` caches table pages; the error relates to per-query `work_mem` allocations.
Yes. Setting `temp_file_limit` cancels queries whose temporary files exceed the limit, stopping runaway disk usage when work spills from memory to disk.
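For example, to cap spills cluster-wide (the 5GB figure is illustrative, not a recommendation):

```sql
-- -1 (the default) means unlimited; queries exceeding the cap are
-- cancelled rather than filling the disk.
ALTER SYSTEM SET temp_file_limit = '5GB';
SELECT pg_reload_conf();
```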
Run `SHOW work_mem;`. To view role or system defaults, query `pg_settings`.
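For example:

```sql
SHOW work_mem;

-- Value, unit, and origin (default, configuration file, role, session, ...)
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem');
```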