The error appears when a query needs more memory than the slot(s) the Redshift WLM queue assigned to it. Redshift allocates a fixed memory slice to every query slot; when the query's hash or sort operations need more RAM than that slice, the leader node aborts the statement and returns the error.
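To see how much memory each slot in a queue actually gets, you can inspect the service-class configuration. A minimal sketch; per the AWS docs, query_working_mem is reported in MB, and user-defined manual WLM queues start at service class 6:

-- Slots and per-slot working memory (MB) for each user-defined queue.
select service_class, num_query_tasks as slots,
       query_working_mem as mb_per_slot
from stv_wlm_service_class_config
where service_class >= 6;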
Run EXPLAIN, or query SVL_QUERY_SUMMARY to see steps with high workmem. Hash joins or sorts on large, unsorted data usually cause the spike.
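For example, listing the most memory-hungry steps of a finished query (the id 123456 is a placeholder; is_diskbased = 't' marks steps that spilled to disk):

select step, label, is_diskbased, workmem
from svl_query_summary
where query = 123456
order by workmem desc;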
Yes. Either assign the query to a WLM queue with a larger memory_percent_to_use, or temporarily raise wlm_query_slot_count in your session.
set wlm_query_slot_count = 3;
This triples the memory slice, letting the query use the RAM of three slots in that queue.
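A typical session pattern is to raise the slot count only for the statement that needs it and then restore the default (a sketch; the heavy statement is whatever failed for you):

-- Claim three slots; the query waits if the queue has fewer free slots.
set wlm_query_slot_count = 3;
-- ... run the memory-heavy statement here ...
-- Restore the default so concurrent statements are not starved of slots.
set wlm_query_slot_count = 1;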
Rewrite large hash joins as joins on distribution-key columns (the rows are co-located, so no redistribution is needed), filter early with WHERE, and order by sort-key columns so Redshift can skip explicit sorts.
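A minimal sketch with two hypothetical tables: distributing both on customer_id co-locates the join, and sorting orders on order_date lets the range filter prune blocks instead of forcing a full scan and sort:

create table customers (
    customer_id bigint,
    region      varchar(32)
)
distkey (customer_id);

create table orders (
    order_id    bigint,
    customer_id bigint,
    order_date  date
)
distkey (customer_id)
sortkey (order_date);

-- The join on the shared dist key runs slice-local; the WHERE clause
-- filters on the sort key before the join.
select c.region, count(*)
from orders o
join customers c on c.customer_id = o.customer_id
where o.order_date >= '2024-01-01'
group by c.region;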
Create a separate queue for ETL or ad-hoc analytics, set its concurrency to 1–3, and give it 70–90% of memory. Keep short BI queries in a high-concurrency, low-memory queue, and route sessions between the two as sketched below.
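Routing is per session (a sketch; it assumes the low-concurrency queue is configured to match the query group 'etl'):

-- Send this session's statements to the queue matching query group 'etl'.
set query_group to 'etl';
-- ... run the heavy ETL statement here ...
reset query_group;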
1. Analyze tables after large loads.
2. Vacuum regularly to keep data sorted and reclaim deleted space.
3. Keep distribution styles and keys aligned across frequently joined tables.
4. Monitor STL_WLM_QUERY for queue waits and recurring memory spills (see the sketch after this list).
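A maintenance-plus-monitoring sketch (the table name is a placeholder; user-defined queues start at service class 6, and total_queue_time is in microseconds):

-- Refresh statistics and re-sort after a large load.
analyze orders;
vacuum orders;

-- Recent queries per user-defined queue, with slot counts and queue waits.
select query, service_class, slot_count,
       total_queue_time / 1000000 as queue_secs
from stl_wlm_query
where service_class >= 6
order by queue_start_time desc
limit 20;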
1. Identify the failing query.
2. Check STL_ALERT_EVENT_LOG for memory alerts (see the sketch after this list).
3. Review the plan.
4. Rewrite the query or raise the slot count.
5. Long-term: redesign table distribution or the WLM configuration.
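For step 2, the alert log records what the engine flagged plus a suggested fix (the query id is a placeholder):

select query, event, solution, event_time
from stl_alert_event_log
where query = 123456
order by event_time desc;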
If optimized queries still exceed memory and lowering concurrency slows workloads, scaling node count or class is the sustainable fix.
Yes, because fewer slots remain for parallel statements. Schedule heavy queries during off-peak hours or isolate them in a low-concurrency queue.
Use Amazon Redshift console metrics, or query STL_WLM_QUERY and SVL_QUERY_REPORT to view the peak workmem after the query completes.
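SVL_QUERY_REPORT has one row per slice, so aggregate to see the peak per step (the query id is a placeholder; workmem is in bytes):

select step, label, max(workmem) as max_workmem_bytes
from svl_query_report
where query = 123456
group by step, label
order by max_workmem_bytes desc;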
No. The message refers to RAM allocated to the query slot, not disk. However, insufficient disk can surface as "disk full" errors, not "max memory exceeded".