Adjusting work_mem reduces temporary disk I/O during large sorts and hash operations, improving query latency and keeping storage from becoming a bottleneck on busy databases.
work_mem is a per-operation memory cap that PostgreSQL applies to internal Sort and Hash nodes, so a single query can hold several allocations at once, and parallel workers multiply the demand further. When an operation exceeds the cap, it spills to temporary files, which typically shows up in EXPLAIN (ANALYZE, BUFFERS) output as temp read and temp written buffer counts or a Disk value; on PostgreSQL 13 and later, hash operations are additionally scaled by hash_mem_multiplier.
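The per-node caps can be sketched with quick arithmetic. The numbers below are illustrative defaults (4MB work_mem, a hash_mem_multiplier of 2.0 as on PostgreSQL 15 and later), not values read from this system:

```shell
# Per-node memory caps implied by work_mem and hash_mem_multiplier.
# Illustrative values: 4MB work_mem, hash_mem_multiplier = 2 (PG 15+ default).
work_mem_kb=4096
hash_mem_multiplier=2

sort_cap_kb=$work_mem_kb                                # Sort nodes use work_mem as-is
hash_cap_kb=$(( work_mem_kb * hash_mem_multiplier ))    # Hash nodes get the multiplied cap

echo "sort cap: ${sort_cap_kb}kB, hash cap: ${hash_cap_kb}kB"
# prints: sort cap: 4096kB, hash cap: 8192kB
```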
A higher value can be a win for specific heavy queries but becomes dangerous when applied globally without a concurrency budget. Validate changes against a representative workload with EXPLAIN (ANALYZE, BUFFERS), then monitor after deployment to confirm that temp-file writes drop without creating memory pressure.
Related: How to optimize PostgreSQL performance
Related: How to explain a query plan in PostgreSQL
$ sudo -u postgres psql -Atc "SHOW work_mem;"
4MB
$ sudo -u postgres psql -d appdb -c "SET work_mem = '64kB'; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY total;"
SET
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Sort (cost=17560.39..17799.09 rows=95480 width=34) (actual time=53.131..67.445 rows=100000 loops=1)
Sort Key: total
Sort Method: external merge Disk: 4856kB
Buffers: shared hit=3 read=875, temp read=1808 written=1964
-> Seq Scan on orders (cost=0.00..1829.80 rows=95480 width=34) (actual time=0.191..8.490 rows=100000 loops=1)
Buffers: shared read=875
Planning:
Buffers: shared hit=107 read=22
Planning Time: 0.281 ms
Execution Time: 70.728 ms
(10 rows)
Disk spills typically appear as Sort Method: external merge Disk: … or as non-zero temp read and temp written buffers on Sort/Hash nodes.
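Spills can also be surfaced in the server log rather than per-query plans. A sketch using the log_temp_files parameter, where 0 logs every temporary file and a positive value sets a size threshold in kilobytes:

```shell
# Log every temporary file the server creates (0 = no size threshold,
# -1 disables the logging). Applied without a restart via reload.
$ sudo -u postgres psql -c "ALTER SYSTEM SET log_temp_files = 0;"
$ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
```

Entries then appear in the server log whenever a query spills, which helps catch offenders that never run under EXPLAIN.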
$ sudo -u postgres psql -d appdb -c "SET work_mem = '64MB'; SHOW work_mem;"
SET
 work_mem
----------
 64MB
(1 row)
Session tuning validates impact without changing instance defaults.
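The override can be confined even further, to a single transaction, with SET LOCAL, which reverts automatically at COMMIT or ROLLBACK. A sketch reusing the example database and query from above:

```shell
$ sudo -u postgres psql -d appdb <<'SQL'
BEGIN;
SET LOCAL work_mem = '64MB';   -- reverts automatically when the transaction ends
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY total;
COMMIT;
SQL
```

This is useful when a connection pooler reuses sessions and a plain SET would leak the setting into unrelated queries.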
$ sudo -u postgres psql -d appdb -c "SET work_mem = '64MB'; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY total;"
SET
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Sort (cost=9727.39..9966.09 rows=95480 width=34) (actual time=32.833..38.033 rows=100000 loops=1)
Sort Key: total
Sort Method: quicksort Memory: 9323kB
Buffers: shared hit=878
-> Seq Scan on orders (cost=0.00..1829.80 rows=95480 width=34) (actual time=0.026..5.924 rows=100000 loops=1)
Buffers: shared hit=875
Planning:
Buffers: shared hit=129
Planning Time: 0.170 ms
Execution Time: 41.197 ms
(10 rows)
$ sudo -u postgres psql -c "ALTER SYSTEM SET work_mem = '16MB';"
ALTER SYSTEM
work_mem applies per Sort/Hash node and can be allocated multiple times per query, so global increases can cause large memory spikes under high concurrency.
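The spike risk can be quantified with rough worst-case arithmetic. The connection count, nodes-per-query, and parallelism figures below are assumptions for illustration, not a limit PostgreSQL enforces:

```shell
# Worst-case work_mem demand under concurrency (illustrative assumptions).
work_mem_mb=64        # proposed global setting
active_queries=100    # assumed concurrent queries
nodes_per_query=3     # assumed Sort/Hash nodes per plan
parallel_factor=3     # leader + 2 workers, each with its own allocation

worst_case_mb=$(( work_mem_mb * active_queries * nodes_per_query * parallel_factor ))
echo "worst case: ${worst_case_mb} MB"   # 57600 MB on these assumptions
```

Real workloads rarely hit every term at once, but the product shows why a global 64MB can be far more expensive than it looks.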
Scoped defaults: ALTER ROLE appuser SET work_mem = '64MB';
ALTER DATABASE appdb SET work_mem = '64MB';
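Scoped overrides can be listed afterwards to confirm what is in place. A sketch querying the pg_db_role_setting catalog (the psql meta-command \drds shows the same information):

```shell
$ sudo -u postgres psql -c "SELECT r.rolname, d.datname, s.setconfig
  FROM pg_db_role_setting s
  LEFT JOIN pg_roles r ON r.oid = s.setrole
  LEFT JOIN pg_database d ON d.oid = s.setdatabase;"
```

Note that ALTER ROLE and ALTER DATABASE settings apply only to new sessions; existing connections keep their current value.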
$ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
t
On systemd-based hosts, sudo systemctl reload postgresql also triggers a configuration reload when a service unit is available.
Over-allocating work_mem can exhaust RAM and trigger severe swapping or an OOM kill during peak load.
$ sudo -u postgres psql -Atc "SHOW work_mem;"
16MB
$ sudo -u postgres psql -d appdb -c "SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written, stats_reset FROM pg_stat_database WHERE datname = current_database();"
 datname | temp_files | temp_written | stats_reset
---------+------------+--------------+-------------
 appdb   |          2 | 6223 kB      |
(1 row)