PostgreSQL performance tuning keeps query latency predictable while increasing throughput under real workloads. Faster plans and fewer I/O stalls reduce tail latencies and stretch the same hardware budget further.

PostgreSQL spends most time in a few places: planning (statistics-driven), executing (CPU, locks, and waits), reading (shared_buffers plus OS cache), and writing (WAL and checkpoints). Healthy statistics, appropriate indexes, and well-behaved autovacuum keep the planner accurate and tables free of avoidable bloat.

Every change trades one resource for another, so measurements matter more than instincts. Oversized memory settings can create host-level memory pressure, verbose logging can fill disks, and some parameters take effect only after a reload or a full restart; examples assume PostgreSQL 13+ with local superuser access via psql on Linux.

PostgreSQL performance tuning checklist:

  1. Check the slow query logging threshold in log_min_duration_statement.
    $ sudo -u postgres psql -Atc "SHOW log_min_duration_statement;"
    -1

    A value of -1 disables duration-based logging entirely; lower thresholds produce larger logs and can increase disk I/O during traffic spikes.
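
    As an illustrative change (the 500ms threshold is an assumption, not a universal recommendation), the setting can be applied with ALTER SYSTEM and a reload:
    $ sudo -u postgres psql -c "ALTER SYSTEM SET log_min_duration_statement = '500ms'"
    ALTER SYSTEM
    $ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
    t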

  2. Verify that pg_stat_statements is loaded from shared_preload_libraries.
    $ sudo -u postgres psql -Atc "SHOW shared_preload_libraries;"
    pg_stat_statements

    Changing shared_preload_libraries requires a PostgreSQL restart, which ends existing sessions.
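
    If the library is missing, it can be added with ALTER SYSTEM and a restart; note that this replaces the whole list, so keep any libraries already present, and the service unit name below is distribution-dependent:
    $ sudo -u postgres psql -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements'"
    ALTER SYSTEM
    $ sudo systemctl restart postgresql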

  3. List the most expensive statements from pg_stat_statements.
    $ sudo -u postgres psql -d appdb -Atc "SELECT calls, round(total_exec_time::numeric, 2) AS total_ms, round(mean_exec_time::numeric, 2) AS mean_ms, left(query, 90) AS query FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;"
    1|4.18|4.18|SELECT count(*) FROM orders
    1|3.21|3.21|CREATE INDEX IF NOT EXISTS events_created_at_idx ON events (created_at DESC)
    1|0.42|0.42|EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM events WHERE created_at > now() - interval $1 ORD
    ##### snipped #####

    The pg_stat_statements view requires CREATE EXTENSION pg_stat_statements; in the target database.
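
    If the view is missing, a minimal setup in the target database looks like this; a later SELECT pg_stat_statements_reset(); clears the accumulated counters when a fresh baseline is needed:
    $ sudo -u postgres psql -d appdb -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements"
    CREATE EXTENSION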

  4. Validate a top statement with EXPLAIN (ANALYZE, BUFFERS).
    $ sudo -u postgres psql -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT id, total FROM public.orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;"
                                                                     QUERY PLAN                                                                  
    ---------------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=291.24..291.37 rows=50 width=22) (actual time=0.226..0.231 rows=50 loops=1)
       Buffers: shared hit=95
       ->  Sort  (cost=291.24..291.49 rows=99 width=22) (actual time=0.225..0.227 rows=50 loops=1)
             Sort Key: created_at DESC
             Sort Method: quicksort  Memory: 31kB
             Buffers: shared hit=95
             ->  Bitmap Heap Scan on orders  (cost=5.06..287.96 rows=99 width=22) (actual time=0.025..0.200 rows=94 loops=1)
                   Recheck Cond: (customer_id = 42)
                   Heap Blocks: exact=87
                   Buffers: shared hit=92
                   ->  Bitmap Index Scan on idx_orders_customer_id  (cost=0.00..5.04 rows=99 width=0) (actual time=0.011..0.012 rows=94 loops=1)
                         Index Cond: (customer_id = 42)
                         Buffers: shared hit=5
     Planning:
       Buffers: shared hit=101
     Planning Time: 0.172 ms
     Execution Time: 0.252 ms
    (17 rows)

    EXPLAIN ANALYZE executes the statement and can cause real load on production systems.

  5. Create an index for a frequent filter or join key.
    $ sudo -u postgres psql -d appdb -c "CREATE INDEX IF NOT EXISTS orders_customer_id_idx ON public.orders (customer_id);"
    CREATE INDEX

    Indexes improve reads but increase write cost, VACUUM work, and storage use.
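
    On a busy table, CREATE INDEX CONCURRENTLY builds the same index without blocking writes for the duration of the build; it is slower, cannot run inside a transaction block, and leaves an invalid index behind if it fails:
    $ sudo -u postgres psql -d appdb -c "CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_customer_id_idx ON public.orders (customer_id);"
    CREATE INDEX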

  6. Run VACUUM (VERBOSE, ANALYZE) on a table with bloat or stale statistics.
    $ sudo -u postgres psql -d appdb -c "VACUUM (VERBOSE, ANALYZE) public.orders;"
    INFO:  vacuuming "appdb.public.orders"
    INFO:  launched 2 parallel vacuum workers for index cleanup (planned: 2)
    ##### snipped #####
    INFO:  analyzing "public.orders"
    INFO:  "orders": scanned 834 of 834 pages, containing 100000 live rows and 0 dead rows; 30000 rows in sample, 100000 estimated total rows
    VACUUM

    Frequent bloat often signals autovacuum thresholds that are too lax for the write rate.
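
    Per-table autovacuum settings can tighten the trigger for a hot table without touching the global defaults; the scale factors below are illustrative values, not recommendations:
    $ sudo -u postgres psql -d appdb -c "ALTER TABLE public.orders SET (autovacuum_vacuum_scale_factor = 0.05, autovacuum_analyze_scale_factor = 0.02);"
    ALTER TABLE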

  7. Check the current shared_buffers value before adjusting cache sizing.
    $ sudo -u postgres psql -Atc "SHOW shared_buffers;"
    512MB

    shared_buffers is global memory; oversized values can starve the OS page cache and other services.
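
    A common starting point is roughly 25% of system RAM, but the right value is workload-dependent; the 2GB figure and the service unit name below are assumptions, and the change only takes effect after a restart:
    $ sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '2GB'"
    ALTER SYSTEM
    $ sudo systemctl restart postgresql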

  8. Check the current work_mem value before tuning sort and hash operations.
    $ sudo -u postgres psql -Atc "SHOW work_mem;"
    16MB

    work_mem is per operation per query; high values can exhaust RAM under concurrency.
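
    Instead of raising the global default, work_mem can be raised for a single session or for a role that runs large sorts and hashes; the 64MB figure and the reporting role are illustrative:
    $ sudo -u postgres psql -d appdb -c "ALTER ROLE reporting SET work_mem = '64MB';"
    ALTER ROLE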

  9. Check WAL sizing to reduce checkpoint spikes on write-heavy workloads.
    $ sudo -u postgres psql -Atc "SHOW max_wal_size;"
    2GB

    Small WAL budgets trigger frequent checkpoints, which can amplify I/O latency.
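
    max_wal_size can be raised with a reload rather than a restart; the 8GB figure below is an illustrative value for a write-heavy host with WAL disk space to spare:
    $ sudo -u postgres psql -c "ALTER SYSTEM SET max_wal_size = '8GB'"
    ALTER SYSTEM
    $ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
    t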

  10. Check random_page_cost to align planner choices with storage latency.
    $ sudo -u postgres psql -Atc "SHOW random_page_cost;"
    4

    Lower values are common on SSD-backed storage so index plans are not over-penalized.
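
    On SSD or NVMe storage a value close to seq_page_cost is typical; 1.1 below is a common illustrative choice, and the new value takes effect on reload:
    $ sudo -u postgres psql -c "ALTER SYSTEM SET random_page_cost = 1.1"
    ALTER SYSTEM
    $ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
    t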

  11. Evaluate partitioning for very large tables whose queries can benefit from partition pruning.

    Partitioning helps when queries, retention, and maintenance can target partitions instead of scanning everything.
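
    A minimal sketch of declarative range partitioning by month; the events_partitioned table, its columns, and the monthly boundaries are assumptions for illustration:
    $ sudo -u postgres psql -d appdb -c "CREATE TABLE public.events_partitioned (id bigint, created_at timestamptz NOT NULL, payload jsonb) PARTITION BY RANGE (created_at);"
    CREATE TABLE
    $ sudo -u postgres psql -d appdb -c "CREATE TABLE public.events_2024_05 PARTITION OF public.events_partitioned FOR VALUES FROM ('2024-05-01') TO ('2024-06-01');"
    CREATE TABLE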

  12. List the longest-running non-idle sessions to spot blocking and wait events.
    $ sudo -u postgres psql -d appdb -c "SELECT pid, usename, state, now()-query_start AS age, wait_event_type, wait_event, left(query, 60) AS query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY age DESC LIMIT 10;"
     pid  | usename  | state  |   age    | wait_event_type | wait_event |                            query                             
    ------+----------+--------+----------+-----------------+------------+--------------------------------------------------------------
     2720 | postgres | active | 00:00:00 |                 |            | SELECT pid, usename, state, now()-query_start AS age, wait_e
    (1 row)

    wait_event_type and wait_event highlight contention sources such as locks, WAL, or I/O.
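
    To see which sessions are blocked and by whom, pg_blocking_pids() maps each waiting backend to the PIDs holding the conflicting locks; an empty result means nothing is currently blocked:
    $ sudo -u postgres psql -d appdb -c "SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, left(query, 60) AS query FROM pg_stat_activity WHERE cardinality(pg_blocking_pids(pid)) > 0;"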

  13. Check the configured connection ceiling in max_connections.
    $ sudo -u postgres psql -Atc "SHOW max_connections;"
    120

    High connection limits increase baseline memory use and can amplify contention during bursts.

  14. Add a connection pooler when client concurrency exceeds sustainable session counts.

    Poolers (such as PgBouncer) reduce process overhead by multiplexing many clients onto fewer backend sessions.
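
    A minimal pgbouncer.ini sketch in transaction pooling mode; the paths, database name, and pool sizes are assumptions, and session-level state (SET, temporary tables, prepared statements) does not carry across transactions in this mode:
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 20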

  15. Capture I/O and cache-hit baselines from pg_stat_database for before/after comparisons.
    $ sudo -u postgres psql -d appdb -Atc "SELECT datname, blks_hit, blks_read, round(blks_hit*100.0/nullif(blks_hit+blks_read,0), 2) AS hit_pct FROM pg_stat_database WHERE datname = current_database();"
    appdb|61876|11|99.98

    Repeat the query after each change and compare hit_pct and blks_read under similar load.
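
    Cumulative counters only support a clean before/after comparison when they start from a known point; pg_stat_reset() zeroes the statistics for the current database, including the table counters autovacuum relies on, so use it deliberately:
    $ sudo -u postgres psql -d appdb -Atc "SELECT pg_stat_reset();"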