Tuning PostgreSQL memory parameters helps eliminate “max memory exceeded”-style errors during large queries or loads. Large sorts, hashes, or parallel aggregations can exhaust work_mem; many concurrent sessions can overrun shared_buffers; huge temp files can hit temp_file_limit. When Linux OOM-kills a backend, the symptom mirrors Oracle’s “max memory exceeded.”
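To confirm spills are actually happening, the cumulative temp-file counters in pg_stat_database are a quick first check; this is a minimal sketch using only standard columns (counters accumulate since the last stats reset):

-- Temp-file spill volume per database since the last stats reset
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS spilled
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;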
work_mem controls per-sort/hash memory, maintenance_work_mem covers VACUUM and CREATE INDEX, shared_buffers manages the shared cache, and temp_file_limit constrains spill size. Adjusting them prevents these failures.
Run SHOW work_mem; or query pg_settings, then compare the values to available RAM and workload concurrency.
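A single pg_settings query shows all four parameters at once, including their units and whether changing them needs a reload or a restart:

-- Current values, units, and change context for the key memory parameters
SELECT name, setting, unit, context
FROM pg_settings
WHERE name IN ('work_mem', 'maintenance_work_mem',
               'shared_buffers', 'temp_file_limit');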
To raise work_mem safely, start with a session-level change (SET work_mem = '128MB';). Because each sort or hash node in a plan can claim its own work_mem allocation, multiply the value by expected concurrency and parallel workers to estimate total usage before changing it globally. ALTER SYSTEM SET work_mem = '64MB'; followed by SELECT pg_reload_conf(); takes effect after the reload and applies to all new sessions.
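One way to validate a candidate value before any global change is a session-level experiment; EXPLAIN (ANALYZE) reports whether a sort stayed in memory or spilled to disk. The table and column names below are placeholders:

-- Try a higher work_mem for this session only (big_table is hypothetical)
SET work_mem = '128MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY created_at;
-- Plan shows "Sort Method: quicksort" (in memory) vs. "external merge" (spilled)
RESET work_mem;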
Set temp_file_limit so a single query cannot fill the disk, for example: ALTER SYSTEM SET temp_file_limit = '5GB';
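temp_file_limit is reloadable, so a sketch of applying it without a restart looks like this; a query that exceeds the cap is cancelled with an error instead of filling the disk:

-- Cap per-process temp file usage, then reload the configuration
ALTER SYSTEM SET temp_file_limit = '5GB';
SELECT pg_reload_conf();
SHOW temp_file_limit;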
Increase shared_buffers toward 25-40% of RAM if the cache hit ratio is low; unlike work_mem, this change requires a server restart. Monitor pg_stat_database before and after.
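A common way to compute the hit ratio from pg_stat_database; as a rough rule of thumb, read-heavy workloads often target a ratio near 0.99:

-- Buffer cache hit ratio per database (counters since the last stats reset)
SELECT datname,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY hit_ratio;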
Monitor sessions in pg_stat_activity, and grant a higher work_mem only to reporting roles. Use pg_stat_kcache or pgBouncer stats to track per-query resource consumption before the OS intervenes.
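pg_stat_activity does not expose memory use directly, but long-running active queries are the usual suspects for large sorts and hashes; a minimal check:

-- Active queries ordered by runtime, excluding this session
SELECT pid, usename, now() - query_start AS runtime, left(query, 80) AS query
FROM pg_stat_activity
WHERE state = 'active' AND pid <> pg_backend_pid()
ORDER BY runtime DESC;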
Extra RAM helps, but PostgreSQL still needs correct work_mem and shared_buffers settings to use it efficiently.
You can also scope memory per role: ALTER ROLE analyst SET work_mem = '256MB'; isolates heavy reports from the rest of the workload.
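Role-level overrides apply at the next login and are stored in pg_db_role_setting, so you can verify them after the fact:

-- Confirm which roles carry custom settings
SELECT r.rolname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_roles r ON r.oid = s.setrole;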
Enable log_temp_files = 0; PostgreSQL then logs every temporary file with its size, letting you tune the worst offenders first.
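Enabling it is a reload-only change; 0 logs every temp file, while a positive value in kilobytes logs only files at least that large:

-- Log every temporary file; raise the threshold later to cut noise
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();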