PostgreSQL performance tuning keeps query latency predictable while increasing throughput under real workloads. Faster plans and fewer I/O stalls reduce tail latencies and stretch the same hardware budget further.

PostgreSQL spends most time in a few places: planning (statistics-driven), executing (CPU, locks, and waits), reading (shared_buffers plus OS cache), and writing (WAL and checkpoints). Healthy statistics, appropriate indexes, and well-behaved autovacuum keep the planner accurate and tables free of avoidable bloat.

Every change trades one resource for another, so measurements matter more than instincts. Memory settings can cause host pressure, logging can fill disks, and some parameters require a reload or a full restart; examples assume PostgreSQL 13+ with local superuser access via psql on Linux.

Steps to optimize PostgreSQL performance:

  1. Check the slow query logging threshold in log_min_duration_statement.
    $ sudo -u postgres psql -Atc "SHOW log_min_duration_statement;"
    500ms

    Lower thresholds produce larger logs and can increase disk I/O during traffic spikes.
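
    To change the threshold, one option is ALTER SYSTEM followed by a configuration reload; the 250ms value below is illustrative, not a recommendation.
    $ sudo -u postgres psql -c "ALTER SYSTEM SET log_min_duration_statement = '250ms';"
    ALTER SYSTEM
    $ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
    t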

  2. Verify that pg_stat_statements is loaded from shared_preload_libraries.
    $ sudo -u postgres psql -Atc "SHOW shared_preload_libraries;"
    pg_stat_statements

    Changing shared_preload_libraries requires a PostgreSQL restart, which ends existing sessions.
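
    If the extension is missing, one way to enable it is shown below; ALTER SYSTEM replaces the whole list, so include any libraries already configured, and the service name passed to systemctl varies by distribution.
    $ sudo -u postgres psql -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';"
    ALTER SYSTEM
    $ sudo systemctl restart postgresql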

  3. List the most expensive statements from pg_stat_statements.
    $ sudo -u postgres psql -d appdb -Atc "SELECT calls, round(total_exec_time::numeric, 2) AS total_ms, round(mean_exec_time::numeric, 2) AS mean_ms, left(query, 90) AS query FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;"
    2|10020.69|5010.34|SELECT pg_sleep($1)
    1|21.55|21.55|CREATE DATABASE appdb_restore
    1|4.44|4.44|CREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public
    1|4.17|4.17|CREATE TABLE events_2025_03 PARTITION OF events FOR VALUES FROM ('2025-03-01 00
    1|3.59|3.59|CREATE INDEX events_event_type_idx ON events (event_type)

    The pg_stat_statements view also requires running CREATE EXTENSION pg_stat_statements in the target database.
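
    To get a clean measurement window before and after a change, the extension's counters can be cleared; this discards the accumulated history.
    $ sudo -u postgres psql -d appdb -Atc "SELECT pg_stat_statements_reset();"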

  4. Validate a top statement with EXPLAIN (ANALYZE, BUFFERS).
    $ sudo -u postgres psql -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT id, total FROM public.orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 50;"
                                                                     QUERY PLAN                                                                  
    ---------------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=9.52..9.54 rows=10 width=22) (actual time=0.023..0.025 rows=10 loops=1)
       Buffers: shared hit=8
       ->  Sort  (cost=9.52..9.54 rows=10 width=22) (actual time=0.023..0.024 rows=10 loops=1)
             Sort Key: created_at DESC
             Sort Method: quicksort  Memory: 25kB
             Buffers: shared hit=8
             ->  Bitmap Heap Scan on orders  (cost=4.23..9.35 rows=10 width=22) (actual time=0.008..0.012 rows=10 loops=1)
                   Recheck Cond: (customer_id = 42)
                   Heap Blocks: exact=4
                   Buffers: shared hit=5
                   ->  Bitmap Index Scan on idx_orders_customer_id  (cost=0.00..4.22 rows=10 width=0) (actual time=0.004..0.004 rows=10 loops=1)
                         Index Cond: (customer_id = 42)
                         Buffers: shared hit=1
     Planning:
       Buffers: shared hit=110
     Planning Time: 0.193 ms
     Execution Time: 0.047 ms
    (17 rows)

    EXPLAIN ANALYZE executes the statement and can cause real load on production systems.
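
    For write statements, a common precaution is to wrap the EXPLAIN in a transaction and roll it back; the UPDATE below is a hypothetical stand-in for the statement being profiled.
    $ sudo -u postgres psql -d appdb -c "BEGIN;" -c "EXPLAIN (ANALYZE, BUFFERS) UPDATE public.orders SET total = total WHERE customer_id = 42;" -c "ROLLBACK;"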

  5. Create an index for a frequent filter or join key.
    $ sudo -u postgres psql -d appdb -c "CREATE INDEX IF NOT EXISTS orders_customer_id_idx ON public.orders (customer_id);"
    CREATE INDEX

    Indexes improve reads but increase write cost, VACUUM work, and storage use.
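
    On a busy table, building the index concurrently avoids holding a long write lock, at the cost of a slower build; the index name and column below are illustrative.
    $ sudo -u postgres psql -d appdb -c "CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_created_at_idx ON public.orders (created_at);"
    CREATE INDEX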

  6. Run VACUUM (VERBOSE, ANALYZE) on a table with bloat or stale statistics.
    $ sudo -u postgres psql -d appdb -c "VACUUM (VERBOSE, ANALYZE) public.orders;"
    INFO:  vacuuming "appdb.public.orders"
    INFO:  finished vacuuming "appdb.public.orders": index scans: 0
    pages: 0 removed, 5 remain, 5 scanned (100.00% of total)
    tuples: 0 removed, 500 remain, 0 are dead but not yet removable
    ##### snipped #####
    INFO:  analyzing "public.orders"
    INFO:  "orders": scanned 5 of 5 pages, containing 500 live rows and 0 dead rows; 500 rows in sample, 500 estimated total rows
    VACUUM

    Frequent bloat often signals autovacuum thresholds that are too lax for the write rate.
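
    For a table that outpaces the global defaults, per-table autovacuum settings can tighten the trigger; the 2% scale factor below is an example, not a recommendation.
    $ sudo -u postgres psql -d appdb -c "ALTER TABLE public.orders SET (autovacuum_vacuum_scale_factor = 0.02);"
    ALTER TABLE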

  7. Check the current shared_buffers value before adjusting cache sizing.
    $ sudo -u postgres psql -Atc "SHOW shared_buffers;"
    128MB

    shared_buffers is global memory; oversized values can starve the OS page cache and other services.
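
    A commonly cited starting point on a dedicated database host is roughly 25% of RAM; the 4GB value below assumes such a host, and the change only takes effect after a restart.
    $ sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '4GB';"
    ALTER SYSTEM
    $ sudo systemctl restart postgresql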

  8. Check the current work_mem value before tuning sort and hash operations.
    $ sudo -u postgres psql -Atc "SHOW work_mem;"
    4MB

    work_mem applies per sort or hash operation, and a single query can run several at once; high values can exhaust RAM under concurrency.
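
    Instead of raising it globally, work_mem can be scoped to a role or session; the role name and value below are hypothetical.
    $ sudo -u postgres psql -c "ALTER ROLE reporting_user SET work_mem = '64MB';"
    ALTER ROLE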

  9. Check WAL sizing to reduce checkpoint spikes on write-heavy workloads.
    $ sudo -u postgres psql -Atc "SHOW max_wal_size;"
    1GB

    Small WAL budgets trigger frequent checkpoints, which can amplify I/O latency.
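
    Before raising max_wal_size, confirm that checkpoints are being requested by WAL volume rather than by the schedule; on PostgreSQL 16 and earlier these counters are in pg_stat_bgwriter (newer releases move them to pg_stat_checkpointer).
    $ sudo -u postgres psql -Atc "SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;"

    A large share of requested checkpoints relative to timed ones suggests the WAL budget is too small for the write rate.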

  10. Check random_page_cost to align planner choices with storage latency.
    $ sudo -u postgres psql -Atc "SHOW random_page_cost;"
    4

    Lower values are common on SSD-backed storage so that index scans are not over-penalized relative to sequential scans.
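
    On SSD or NVMe storage a value close to seq_page_cost is a frequent choice; 1.1 below is illustrative, and a reload is enough to apply it.
    $ sudo -u postgres psql -c "ALTER SYSTEM SET random_page_cost = 1.1;"
    ALTER SYSTEM
    $ sudo -u postgres psql -Atc "SELECT pg_reload_conf();"
    t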

  11. Evaluate partitioning for very large tables that support partition pruning.

    Partitioning helps when queries, retention, and maintenance can target partitions instead of scanning everything.
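
    A minimal range-partitioning sketch, reusing the events table and monthly partition visible in the pg_stat_statements output above; the column list is an assumption for illustration.
    $ sudo -u postgres psql -d appdb -c "CREATE TABLE events (id bigint, event_type text, created_at timestamptz NOT NULL) PARTITION BY RANGE (created_at);"
    CREATE TABLE
    $ sudo -u postgres psql -d appdb -c "CREATE TABLE events_2025_03 PARTITION OF events FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');"
    CREATE TABLE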

  12. List the longest-running non-idle sessions to spot blocking and wait events.
    $ sudo -u postgres psql -d appdb -c "SELECT pid, usename, state, now()-query_start AS age, wait_event_type, wait_event, left(query, 60) AS query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY age DESC LIMIT 10;"
     pid  | usename  | state  |   age    | wait_event_type | wait_event |                            query                             
    ------+----------+--------+----------+-----------------+------------+--------------------------------------------------------------
     7716 | postgres | active | 00:00:00 |                 |            | SELECT pid, usename, state, now()-query_start AS age, wait_e
    (1 row)

    wait_event_type and wait_event highlight contention sources such as locks, WAL, or I/O.
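
    When a session appears stuck on a lock, pg_blocking_pids() identifies the sessions holding it.
    $ sudo -u postgres psql -d appdb -c "SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, left(query, 60) AS query FROM pg_stat_activity WHERE cardinality(pg_blocking_pids(pid)) > 0;"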

  13. Check the configured connection ceiling in max_connections.
    $ sudo -u postgres psql -Atc "SHOW max_connections;"
    120

    High connection limits increase baseline memory use and can amplify contention during bursts.
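
    Comparing the ceiling with actual usage shows how much headroom remains.
    $ sudo -u postgres psql -Atc "SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;"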

  14. Add a connection pooler when client concurrency exceeds sustainable session counts.

    Poolers (such as PgBouncer) reduce process overhead by multiplexing many clients onto fewer backend sessions.
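
    A minimal PgBouncer sketch in transaction pooling mode; paths, ports, and pool sizes are placeholders, not recommendations.
    ; /etc/pgbouncer/pgbouncer.ini (illustrative)
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 20

    Transaction pooling restricts session-level features such as SET and session advisory locks, so verify application compatibility before switching.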

  15. Capture I/O and cache-hit baselines from pg_stat_database for before/after comparisons.
    $ sudo -u postgres psql -d appdb -Atc "SELECT datname, blks_hit, blks_read, round(blks_hit*100.0/nullif(blks_hit+blks_read,0), 2) AS hit_pct FROM pg_stat_database WHERE datname = current_database();"
    appdb|72715|1592|97.86

    Repeat the query after each change and compare hit_pct and blks_read under similar load.
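
    For a crisp before/after window, the cumulative counters for the current database can be reset; note that clearing table-level counters can also delay the next autovacuum trigger.
    $ sudo -u postgres psql -d appdb -Atc "SELECT pg_stat_reset();"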