Planner cost settings determine how PostgreSQL weighs I/O, CPU, caching, and parallelism when choosing a query plan, so realistic values help the optimizer prefer plans that match the actual hardware and workload.
The planner compares alternative strategies (sequential scans, index scans, joins, sorts, parallel plans) using table statistics plus cost constants such as random_page_cost for non-sequential reads, cpu_tuple_cost for row processing, and parallel_setup_cost for parallel overhead. These constants are not milliseconds, but relative weights used to rank plans against each other.
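The way these constants combine can be sketched with the simplified sequential-scan formula described in the PostgreSQL documentation: pages read are charged `seq_page_cost`, each row costs `cpu_tuple_cost`, and each WHERE-clause operator costs `cpu_operator_cost`. The table size below is an illustrative assumption, not a real measurement.

```python
# Simplified sequential-scan cost model (default planner constants).
# The real planner adds more terms; this only shows how the weights combine.
seq_page_cost = 1.0
cpu_tuple_cost = 0.01
cpu_operator_cost = 0.0025

pages, rows = 10_000, 1_000_000   # hypothetical table size

scan_cost = pages * seq_page_cost + rows * cpu_tuple_cost
filter_cost = rows * cpu_operator_cost  # one operator in the WHERE clause

total = scan_cost + filter_cost
print(total)  # 22500.0
```

Because every term is a relative weight rather than a time unit, doubling all constants would leave plan choices unchanged; only their ratios matter.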
Cost tuning is sensitive to storage latency, cache size, and query mix, and aggressive changes can shift plans in ways that speed up one query while slowing others. Measurements from EXPLAIN (ANALYZE, BUFFERS) on representative queries, plus incremental parameter changes, keep results interpretable, and stale statistics should be refreshed before drawing conclusions.
Related: How to explain a query plan in PostgreSQL
Related: How to optimize PostgreSQL performance
$ psql -U postgres -d appdb -c "SELECT current_database();"
 current_database
------------------
 appdb
(1 row)
On local servers that use peer authentication, connect with sudo -u postgres psql -d appdb instead.
$ psql -U postgres -d appdb -c "ANALYZE;"
ANALYZE
ANALYZE can consume I/O and CPU on large databases, so schedule it for low-traffic windows when needed.
$ psql -U postgres -d appdb -c "SELECT name, current_setting(name) AS setting FROM (VALUES
('random_page_cost'),
('seq_page_cost'),
('cpu_tuple_cost'),
('cpu_index_tuple_cost'),
('cpu_operator_cost'),
('effective_cache_size'),
('parallel_setup_cost'),
('parallel_tuple_cost')
) AS t(name)
ORDER BY name;"
name | setting
----------------------+---------
cpu_index_tuple_cost | 0.005
cpu_operator_cost | 0.0025
cpu_tuple_cost | 0.01
effective_cache_size | 4GB
parallel_setup_cost | 1000
parallel_tuple_cost | 0.1
random_page_cost | 4
seq_page_cost | 1
(8 rows)
$ psql -U postgres -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;"
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..182.09 rows=50 width=34) (actual time=0.019..0.145 rows=50 loops=1)
Buffers: shared hit=71
-> Index Scan using orders_customer_id_created_at_idx on orders (cost=0.42..719.85 rows=198 width=34) (actual time=0.018..0.140 rows=50 loops=1)
Index Cond: (customer_id = 42)
Buffers: shared hit=71
Planning:
Buffers: shared hit=133
Planning Time: 0.282 ms
Execution Time: 0.168 ms
(9 rows)
EXPLAIN (ANALYZE) executes the statement, so avoid data-changing SQL or run it inside a transaction that can be rolled back.
Replace the example query with a real workload query to measure meaningful plan shifts.
$ psql -U postgres -d appdb -c "SET random_page_cost = 1.1; SET seq_page_cost = 1.0; SET effective_cache_size = '12GB'; SET parallel_setup_cost = 200; SET parallel_tuple_cost = 0.05;"
SET
SET
SET
SET
SET
SET applies only to the current session, so these values disappear when this psql invocation exits; the next command therefore combines the SET statements with EXPLAIN in a single session.
random_page_cost reflects the relative penalty for non-sequential reads, so SSD-backed storage often starts around 1.1 to 1.5 while spinning disks frequently stay closer to 3 to 4.
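To see why this matters, the dominant I/O terms can be compared directly: an index scan pays roughly random_page_cost per heap page it visits, while a sequential scan pays seq_page_cost for every page. The page counts below are illustrative assumptions, and the real index-scan model includes additional terms.

```python
# Hedged sketch: compare the dominant I/O cost of an index scan
# (random fetches of a subset of pages) against a full sequential scan.
def io_cost(pages_fetched, page_cost):
    return pages_fetched * page_cost

total_pages = 10_000      # whole table
selected_pages = 3_000    # heap pages the index scan must visit (assumed)

seq = io_cost(total_pages, 1.0)  # seq_page_cost = 1.0

for rpc in (4.0, 1.1):           # HDD-style default vs SSD-style setting
    idx = io_cost(selected_pages, rpc)
    winner = "index" if idx < seq else "seq"
    print(f"random_page_cost={rpc}: index={idx}, seq={seq} -> {winner} scan")
```

With random_page_cost at 4 the index scan looks more expensive than reading the whole table, but at 1.1 it wins, which is exactly the kind of plan flip the setting is meant to produce on low-latency storage.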
effective_cache_size is an estimate for cached data, not a memory reservation.
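A common starting heuristic (a rule of thumb, not a PostgreSQL requirement) is to set effective_cache_size to roughly 50 to 75 percent of system RAM, since it should approximate shared_buffers plus the OS page cache available to the server. The helper below is hypothetical:

```python
# Rule-of-thumb sizing helper; the fraction is an assumption, not a rule.
def suggest_effective_cache_size(ram_gb, fraction=0.75):
    """Return a candidate effective_cache_size value for postgresql.conf."""
    return f"{int(ram_gb * fraction)}GB"

print(suggest_effective_cache_size(16))  # '12GB' on a 16 GB host
```

The 12GB used throughout this article matches this heuristic for a 16 GB machine; adjust the fraction downward if other services share the host.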
parallel_setup_cost and parallel_tuple_cost bias the planner toward or away from parallel plans, so validate parallel changes against the actual query mix.
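The trade-off can be sketched as a break-even calculation: a parallel plan pays parallel_setup_cost once plus parallel_tuple_cost for each row passed from workers to the leader, so small result sets rarely justify the overhead. The numbers below mirror the defaults and the values set in this article, but the model is deliberately simplified.

```python
# Simplified parallel-plan overhead: one-time setup plus per-row transfer.
def parallel_overhead(rows, setup_cost=1000.0, tuple_cost=0.1):
    return setup_cost + rows * tuple_cost

# Default constants: heavy fixed cost discourages parallelism for small results.
print(parallel_overhead(100))                  # 1010.0
# Lowered constants (as set above): the same workloads look cheaper to parallelize.
print(parallel_overhead(1_000_000, 200.0, 0.05))
```

Lowering both constants shifts the break-even point so that parallel plans win on smaller result sets, which is why such changes should be validated against the real query mix rather than a single benchmark query.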
$ psql -U postgres -d appdb -c "SET random_page_cost = 1.1; SET seq_page_cost = 1.0; SET effective_cache_size = '12GB'; SET parallel_setup_cost = 200; SET parallel_tuple_cost = 0.05; EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;"
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..51.01 rows=50 width=34) (actual time=0.019..0.135 rows=50 loops=1)
Buffers: shared hit=69
-> Index Scan using orders_customer_id_created_at_idx on orders (cost=0.42..200.77 rows=198 width=34) (actual time=0.019..0.130 rows=50 loops=1)
Index Cond: (customer_id = 42)
Buffers: shared hit=69
Planning:
Buffers: shared hit=133
Planning Time: 0.304 ms
Execution Time: 0.160 ms
(9 rows)
$ psql -U postgres -d appdb -c "RESET random_page_cost; RESET seq_page_cost; RESET effective_cache_size; RESET parallel_setup_cost; RESET parallel_tuple_cost;"
RESET
RESET
RESET
RESET
RESET
$ psql -U postgres -d appdb -c "ALTER SYSTEM SET random_page_cost = 1.1;"
ALTER SYSTEM
$ psql -U postgres -d appdb -c "ALTER SYSTEM SET seq_page_cost = 1.0;"
ALTER SYSTEM
$ psql -U postgres -d appdb -c "ALTER SYSTEM SET effective_cache_size = '12GB';"
ALTER SYSTEM
$ psql -U postgres -d appdb -c "ALTER SYSTEM SET parallel_setup_cost = 200;"
ALTER SYSTEM
$ psql -U postgres -d appdb -c "ALTER SYSTEM SET parallel_tuple_cost = 0.05;"
ALTER SYSTEM
ALTER SYSTEM writes the values to postgresql.auto.conf in the data directory; they take effect only after a configuration reload or server restart.
ALTER DATABASE and ALTER ROLE provide scoped defaults when only specific databases or clients need different cost assumptions.
$ psql -U postgres -d appdb -c "SELECT pg_reload_conf();"
 pg_reload_conf
----------------
 t
(1 row)
$ psql -U postgres -d appdb -c "SELECT current_database();"
 current_database
------------------
 appdb
(1 row)
$ psql -U postgres -d appdb -c "SELECT name, current_setting(name) AS setting FROM (VALUES
('random_page_cost'),
('seq_page_cost'),
('effective_cache_size'),
('parallel_setup_cost'),
('parallel_tuple_cost')
) AS t(name)
ORDER BY name;"
name | setting
----------------------+---------
effective_cache_size | 12GB
parallel_setup_cost | 200
parallel_tuple_cost | 0.05
random_page_cost | 1.1
seq_page_cost | 1
(5 rows)
$ psql -U postgres -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;"
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..51.34 rows=50 width=34) (actual time=0.019..0.124 rows=50 loops=1)
Buffers: shared hit=47
-> Index Scan using orders_customer_id_created_at_idx on orders (cost=0.42..192.92 rows=189 width=34) (actual time=0.018..0.118 rows=50 loops=1)
Index Cond: (customer_id = 42)
Buffers: shared hit=47
Planning:
Buffers: shared hit=133
Planning Time: 0.297 ms
Execution Time: 0.143 ms
(9 rows)
Cost changes that help one query can regress others, so validate a small set of high-value queries before rolling out further adjustments, and revert any setting that does not pay off across the workload.
$ psql -U postgres -d appdb -c "ALTER SYSTEM RESET random_page_cost;"
ALTER SYSTEM
$ psql -U postgres -d appdb -c "SELECT pg_reload_conf();"
 pg_reload_conf
----------------
 t
(1 row)