Planner cost settings determine how PostgreSQL weighs I/O, CPU, caching, and parallelism when choosing a query plan, so realistic values help the optimizer prefer plans that match the actual hardware and workload.
The planner compares alternative strategies (sequential scans, index scans, joins, sorts, parallel plans) using table statistics plus cost constants such as random_page_cost for non-sequential reads, cpu_tuple_cost for row processing, and parallel_setup_cost for parallel overhead. These constants are not milliseconds, but relative weights used to rank plans against each other.
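As a sketch of how these weights combine: the planner's base cost for a sequential scan is roughly relpages * seq_page_cost + reltuples * cpu_tuple_cost (per-row qual evaluation adds cpu_operator_cost on top), which can be approximated by hand from pg_class. The orders table here is a hypothetical example:

```sql
-- Approximate the planner's base sequential-scan cost by hand:
--   relpages * seq_page_cost + reltuples * cpu_tuple_cost
SELECT relname,
       relpages,
       reltuples,
       relpages  * current_setting('seq_page_cost')::numeric
     + reltuples * current_setting('cpu_tuple_cost')::numeric AS est_seq_scan_cost
FROM pg_class
WHERE relname = 'orders';   -- hypothetical table used in the examples below
```

Comparing this figure to the cost shown on a Seq Scan node in EXPLAIN output makes the relative nature of the constants concrete.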
Cost tuning is sensitive to storage latency, cache size, and query mix, and aggressive changes can shift plans in ways that speed up one query while slowing others. Measurements from EXPLAIN (ANALYZE, BUFFERS) on representative queries, plus incremental parameter changes, keep results interpretable, and stale statistics should be refreshed before drawing conclusions.
Steps to tune planner cost settings in PostgreSQL:
- Open a psql session using a role permitted to change server settings.
$ psql -U postgres -d appdb
psql (16.1)
Type "help" for help.
appdb=#
On local servers that use peer authentication, connect with sudo -u postgres psql -d appdb instead.
- Refresh planner statistics for the test database before capturing plans.
appdb=# ANALYZE;
ANALYZE
ANALYZE can consume I/O and CPU on large databases, so schedule it for low-traffic windows when needed.
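To check whether statistics are already fresh before running a manual ANALYZE, the collection timestamps in pg_stat_user_tables can be inspected first:

```sql
-- When were statistics last gathered, manually or by autovacuum?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY GREATEST(last_analyze, last_autoanalyze) DESC NULLS LAST;
```

Tables with NULL in both columns have never been analyzed and sort last.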
- Record the current planner cost settings as a baseline.
appdb=# SELECT name, current_setting(name) AS setting
appdb-# FROM (VALUES ('random_page_cost'), ('seq_page_cost'), ('cpu_tuple_cost'),
appdb-#              ('cpu_index_tuple_cost'), ('cpu_operator_cost'), ('effective_cache_size'),
appdb-#              ('parallel_setup_cost'), ('parallel_tuple_cost')) AS t(name)
appdb-# ORDER BY name;
         name         | setting
----------------------+---------
 cpu_index_tuple_cost | 0.005
 cpu_operator_cost    | 0.0025
 cpu_tuple_cost       | 0.01
 effective_cache_size | 4GB
 parallel_setup_cost  | 1000
 parallel_tuple_cost  | 0.1
 random_page_cost     | 4
 seq_page_cost        | 1
(8 rows)
- Capture a baseline plan for a representative query using EXPLAIN (ANALYZE, BUFFERS).
appdb=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;
 Limit  (cost=0.00..1023.54 rows=50 width=128) (actual time=15.842..15.872 rows=50 loops=1)
   Buffers: shared hit=3 read=420
   ->  Sort  (cost=0.00..20588.34 rows=1000000 width=128) (actual time=15.840..15.846 rows=50 loops=1)
         Sort Key: created_at DESC
         Sort Method: top-N heapsort  Memory: 33kB
         Buffers: shared hit=3 read=420
         ->  Seq Scan on orders  (cost=0.00..18449.00 rows=1000000 width=128) (actual time=0.023..12.901 rows=1000 loops=1)
               Filter: (customer_id = 42)
               Rows Removed by Filter: 999000
               Buffers: shared hit=3 read=420
 Planning Time: 0.230 ms
 Execution Time: 15.920 ms
EXPLAIN (ANALYZE) executes the statement, so avoid data-changing SQL or run it inside a transaction that can be rolled back.
Replace the example query with a real workload query to measure meaningful plan shifts.
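For data-changing statements, one safe pattern is to wrap the EXPLAIN in a transaction and roll it back; the UPDATE and its status column here are hypothetical examples:

```sql
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
  UPDATE orders SET status = 'archived'
  WHERE created_at < now() - interval '1 year';
ROLLBACK;   -- the UPDATE executed, but its effects are discarded
```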
- Set trial planner cost values for the current session.
appdb=# SET random_page_cost = 1.1;
SET
appdb=# SET seq_page_cost = 1.0;
SET
appdb=# SET effective_cache_size = '12GB';
SET
appdb=# SET parallel_setup_cost = 200;
SET
appdb=# SET parallel_tuple_cost = 0.05;
SET
random_page_cost reflects the relative penalty for non-sequential reads, so SSD-backed storage often starts around 1.1 to 1.5 while spinning disks frequently stay closer to 3 to 4.
effective_cache_size is an estimate for cached data, not a memory reservation.
parallel_setup_cost and parallel_tuple_cost bias the planner toward or away from parallel plans, so validate parallel changes against the actual query mix.
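A quick way to see whether the trial values change the planner's appetite for parallelism is a plain EXPLAIN on a scan-heavy aggregate; a Gather node with Workers Planned indicates a parallel plan (parallelism also requires max_parallel_workers_per_gather > 0):

```sql
-- Look for "Gather" and "Workers Planned: N" in the plan output
EXPLAIN SELECT count(*) FROM orders;
```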
- Re-run the same EXPLAIN (ANALYZE, BUFFERS) statement to compare plan shape, timing, and buffer usage.
appdb=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;
 Limit  (cost=0.42..55.93 rows=50 width=128) (actual time=0.192..0.248 rows=50 loops=1)
   Buffers: shared hit=58 read=2
   ->  Index Scan using orders_customer_id_created_at_idx on orders  (cost=0.42..11234.12 rows=10000 width=128) (actual time=0.190..0.240 rows=50 loops=1)
         Index Cond: (customer_id = 42)
 Planning Time: 0.320 ms
 Execution Time: 0.291 ms
- Reset the session overrides to return to cluster defaults after testing.
appdb=# RESET random_page_cost;
RESET
appdb=# RESET seq_page_cost;
RESET
appdb=# RESET effective_cache_size;
RESET
appdb=# RESET parallel_setup_cost;
RESET
appdb=# RESET parallel_tuple_cost;
RESET
- Persist the chosen cost values using ALTER SYSTEM.
appdb=# ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM
appdb=# ALTER SYSTEM SET seq_page_cost = 1.0;
ALTER SYSTEM
appdb=# ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM
appdb=# ALTER SYSTEM SET parallel_setup_cost = 200;
ALTER SYSTEM
appdb=# ALTER SYSTEM SET parallel_tuple_cost = 0.05;
ALTER SYSTEM
ALTER SYSTEM writes to postgresql.auto.conf in the data directory.
ALTER DATABASE and ALTER ROLE provide scoped defaults when only specific databases or clients need different cost assumptions.
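As a sketch, scoped defaults might look like this (reportsdb and batch_etl are hypothetical names):

```sql
-- Apply a lower random-read cost only to one database
ALTER DATABASE reportsdb SET random_page_cost = 1.1;

-- Bias one role's sessions toward parallel plans
ALTER ROLE batch_etl SET parallel_setup_cost = 200;
```

These defaults take effect at the next connection by the affected database or role, without touching cluster-wide settings.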
- Reload the server configuration to apply the new defaults.
appdb=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)
- Reconnect to the database to ensure the defaults come from server configuration.
appdb=# \c appdb
You are now connected to database "appdb" as user "postgres".
- Confirm the effective values after reload using current_setting().
appdb=# SELECT name, current_setting(name) AS setting
appdb-# FROM (VALUES ('random_page_cost'), ('seq_page_cost'), ('effective_cache_size'),
appdb-#              ('parallel_setup_cost'), ('parallel_tuple_cost')) AS t(name)
appdb-# ORDER BY name;
         name         | setting
----------------------+---------
 effective_cache_size | 12GB
 parallel_setup_cost  | 200
 parallel_tuple_cost  | 0.05
 random_page_cost     | 1.1
 seq_page_cost        | 1
(5 rows)
- Repeat the representative EXPLAIN (ANALYZE, BUFFERS) run under the persisted defaults.
appdb=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;
##### snipped #####
A change that helps one query can regress others, so test a small set of high-value queries before rolling the adjustment out more widely; problematic changes can always be reverted.
- Roll back a problematic change using ALTER SYSTEM RESET.
appdb=# ALTER SYSTEM RESET random_page_cost;
ALTER SYSTEM
appdb=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)
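To confirm where an effective value now comes from (a session override, postgresql.auto.conf, a configuration file, or the built-in default), pg_settings exposes the source of each setting:

```sql
SELECT name, setting, source, sourcefile
FROM pg_settings
WHERE name IN ('random_page_cost', 'seq_page_cost');
```

After the reset and reload above, random_page_cost should report its source as default rather than configuration file.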
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
