Tune memory-related parameters such as work_mem and temp_file_limit so that PostgreSQL sessions stay within their configured memory and temporary-file budgets instead of aborting mid-query.
Each sort, hash, or aggregate operation may use up to work_mem of RAM. When the data outgrows that budget, PostgreSQL spills to temporary files on disk; if those files then hit temp_file_limit, the query is cancelled with an error such as "temporary file size exceeds temp_file_limit."
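To see the failure mode in isolation, you can reproduce it in a scratch session. The limits below are deliberately tiny and purely illustrative; note that temp_file_limit is a superuser-settable parameter:

    -- Deliberately tiny limits, for demonstration only:
    SET work_mem = '64kB';          -- smallest allowed value; forces a disk spill
    SET temp_file_limit = '1MB';    -- superuser-settable
    SELECT g FROM generate_series(1, 10000000) AS g ORDER BY g DESC;
    -- fails with: ERROR: temporary file size exceeds temp_file_limit (1024kB)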
Run SHOW work_mem;, SHOW maintenance_work_mem;, and SHOW temp_file_limit; to inspect the current values. Compare the results with the physical RAM and the concurrent session count to spot unrealistic defaults.
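All three values can also be pulled in one query from the pg_settings catalog view, which additionally shows each parameter's unit:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('work_mem', 'maintenance_work_mem', 'temp_file_limit');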
Increase work_mem only as high as the server can afford for each active backend; a single query can use several multiples of work_mem (one per sort or hash node), so budget against the expected number of concurrent operations, not just sessions. Use ALTER SYSTEM SET work_mem = '64MB';, then SELECT pg_reload_conf();. For one-off heavy reports, issue SET LOCAL work_mem = '512MB'; inside the transaction.
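Put together, the cluster-wide change and the transaction-scoped override look like this (the values are the illustrative ones from above):

    -- Cluster-wide default; persists in postgresql.auto.conf:
    ALTER SYSTEM SET work_mem = '64MB';
    SELECT pg_reload_conf();        -- work_mem reloads without a restart

    -- Transaction-scoped override for a single heavy report:
    BEGIN;
    SET LOCAL work_mem = '512MB';   -- reverts automatically at COMMIT/ROLLBACK
    -- ... run the report query here ...
    COMMIT;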
Large on-disk sorts inflate base/pgsql_tmp. Raise temp_file_limit or refactor the query to process fewer rows at once. Use SET temp_file_limit = '10GB'; for a session-level override (changing it requires superuser privileges, or a GRANT SET on the parameter in PostgreSQL 15 and later).
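To see which databases are actually generating temporary-file traffic, the cumulative statistics in pg_stat_database are a quick first check:

    SET temp_file_limit = '10GB';   -- session-level override (superuser)

    -- Rank databases by temporary-file volume written since the last stats reset:
    SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
    FROM pg_stat_database
    ORDER BY temp_bytes DESC;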
Yes, shared_buffers can also be at fault: set too high, it shrinks the OS page cache and forces the system to swap. Keep shared_buffers around 25% of total RAM on dedicated database servers.
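For example, on a dedicated server with 32GB of RAM (an illustrative figure), the 25% guideline works out as follows; note that shared_buffers, unlike work_mem, only takes effect after a restart:

    -- Illustrative sizing for a dedicated 32GB server:
    ALTER SYSTEM SET shared_buffers = '8GB';
    -- shared_buffers cannot be changed by pg_reload_conf();
    -- restart the server for the new value to take effect.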
Enable log_min_error_statement = ERROR and log_temp_files = 0. PostgreSQL will then write the failing SQL and the size of every temporary file to the log, helping you pinpoint problem statements.
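Both settings can be applied without a restart, for example via ALTER SYSTEM:

    ALTER SYSTEM SET log_min_error_statement = 'error';  -- log the SQL of failing statements
    ALTER SYSTEM SET log_temp_files = 0;                  -- 0 = log every temp file and its size
    SELECT pg_reload_conf();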
• Back ORDER BY columns with indexes so the planner can return rows in index order instead of sorting.
• Use multicolumn indexes to avoid large in-memory sorts (see the sketch after this list).
• Break analytical jobs into smaller batches with LIMIT/OFFSET or window functions.
• Schedule heavy reports during off-peak hours.
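As a sketch of the index-backed approach (the table and column names here are hypothetical), a multicolumn index that matches the query's filter and ORDER BY lets PostgreSQL stream rows in order with no sort node at all:

    -- Hypothetical schema for illustration:
    CREATE INDEX idx_orders_customer_created
        ON orders (customer_id, created_at);

    -- The filter uses the leading column and the ORDER BY matches the
    -- index order, so EXPLAIN shows an Index Scan and no Sort node:
    EXPLAIN
    SELECT * FROM orders
    WHERE customer_id = 42
    ORDER BY created_at
    LIMIT 1000;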
Raise work_mem only when the workload is dominated by sorts or hash joins. If queries are CPU-bound or disk-bound, increasing work_mem offers little benefit and can harm overall stability.
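EXPLAIN ANALYZE tells you which case you are in: a sort that fits in memory reports "quicksort Memory: ...kB", while one that spilled reports "external merge Disk: ...kB". The query below is illustrative:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, sum(total)
    FROM orders
    GROUP BY customer_id
    ORDER BY 2 DESC;
    -- Look for "Sort Method: external merge  Disk: NNNNkB" in the output;
    -- if present, the sort spilled and a larger work_mem could help.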
Set log_temp_files = 0 to log every temporary file along with its size. Tools like pgBadger aggregate these logs so you can identify memory-hungry queries.
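If the pg_stat_statements extension is available (an assumption; it ships with PostgreSQL but must be explicitly enabled), you can also rank statements by temporary I/O directly, without parsing logs:

    -- Requires shared_preload_libraries = 'pg_stat_statements'
    -- and CREATE EXTENSION pg_stat_statements;
    SELECT left(query, 60) AS query, temp_blks_written
    FROM pg_stat_statements
    ORDER BY temp_blks_written DESC
    LIMIT 10;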
Create the role and apply ALTER ROLE developer SET work_mem = '32MB';. This gives that user a lower default without affecting the rest of the system (note it is a default, not a hard cap; the session can still override it with SET).
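A minimal sketch, using a hypothetical developer role:

    -- 'developer' is a hypothetical role name:
    CREATE ROLE developer LOGIN;
    ALTER ROLE developer SET work_mem = '32MB';

    -- Verify the per-role default:
    SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'developer';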