A small number of slow queries often accounts for most database latency and wasted CPU and I/O, even when the average query is fast. Finding the statements that dominate total execution time turns performance work into a short, prioritized list instead of guesswork.
PostgreSQL can surface expensive statements from two angles: per-event logging (slow query logs) and aggregated statistics collected inside the server. The pg_stat_statements extension aggregates execution metrics per normalized query, making it possible to rank statements by total_exec_time, mean_exec_time, calls, and returned rows.
Enabling pg_stat_statements requires adding it to shared_preload_libraries and restarting PostgreSQL, which is disruptive on busy systems. Statistics collection adds overhead and the shared view can expose statement text, so access should be restricted and changes scheduled during a maintenance window. The extension is created per-database, so it must exist in each database that should be monitored.
Related: How to log slow queries in PostgreSQL
Related: How to explain a query plan in PostgreSQL
Steps to analyze slow queries in PostgreSQL:
- Confirm the active configuration file path before editing.
$ sudo -u postgres psql -Atc "SHOW config_file;"
/var/lib/postgresql/data/postgresql.conf
- Add pg_stat_statements to shared_preload_libraries in the active postgresql.conf file.
shared_preload_libraries = 'pg_stat_statements'
If other libraries are already listed, append pg_stat_statements separated by a comma.
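For example, if the server already preloads another library, the combined entry might look like this (auto_explain is shown only as an example of a coexisting library; the pg_stat_statements settings below are optional and shown at their defaults):

```
# postgresql.conf — comma-separated list of libraries to preload at server start
shared_preload_libraries = 'auto_explain,pg_stat_statements'

# Optional pg_stat_statements settings (defaults shown for reference)
pg_stat_statements.max = 5000    # maximum number of distinct statements tracked
pg_stat_statements.track = top   # track top-level statements only
```
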
- Restart PostgreSQL to load the preloaded extension.
$ sudo systemctl restart postgresql
Restarting drops active connections.
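After the restart, it is worth confirming that the library actually loaded before creating the extension. Assuming the configuration above, the check looks like this (the value simply reflects whatever the active postgresql.conf lists):

```
$ sudo -u postgres psql -Atc "SHOW shared_preload_libraries;"
pg_stat_statements
```

If pg_stat_statements is missing from the output, the server is reading a different configuration file or the restart did not take effect.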
- Create the extension in the database that should be monitored.
$ sudo -u postgres psql -d appdb -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
NOTICE:  extension "pg_stat_statements" already exists, skipping
CREATE EXTENSION
Replace appdb with the target database name, and repeat for each monitored database.
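When several databases need monitoring, a short loop avoids repeating the command by hand. This is a sketch; the database names appdb1 and appdb2 are placeholders, and it assumes local access as the postgres user works without a password:

```shell
# Hypothetical database names; replace with your own list
for db in appdb1 appdb2; do
  sudo -u postgres psql -d "$db" -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
done
```
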
- Reset collected statement statistics to start a clean measurement window.
$ sudo -u postgres psql -d appdb -c "SELECT pg_stat_statements_reset();"
 pg_stat_statements_reset
--------------------------

(1 row)
This resets statistics for the entire PostgreSQL instance, not just one database.
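On PostgreSQL 12 and later, pg_stat_statements_reset() accepts optional arguments (userid, dbid, queryid) to narrow the reset; passing 0 for an argument means "all". For example, to clear statistics only for the current database:

```sql
-- Reset only statements recorded for the current database (PostgreSQL 12+)
SELECT pg_stat_statements_reset(
    0,                                                                   -- all users
    (SELECT oid FROM pg_database WHERE datname = current_database()),    -- this database only
    0                                                                    -- all queries
);
```
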
- List the slowest statements by total or average execution time.
$ sudo -u postgres psql -d appdb -c "SELECT queryid, calls, round(total_exec_time::numeric,2) AS total_ms, round(mean_exec_time::numeric,2) AS mean_ms, rows, left(query,120) AS query FROM pg_stat_statements WHERE dbid = (SELECT oid FROM pg_database WHERE datname = current_database()) ORDER BY total_exec_time DESC LIMIT 10;"
       queryid        | calls | total_ms | mean_ms | rows |                                      query
----------------------+-------+----------+---------+------+--------------------------------------------------------------------------------
 -8049075132994383957 |     1 |     4.18 |    4.18 |    1 | SELECT count(*) FROM orders
  4690036726454562706 |     1 |     3.21 |    3.21 |    0 | CREATE INDEX IF NOT EXISTS events_created_at_idx ON events (created_at DESC)
  6337060538879424372 |     1 |     0.32 |    0.32 |   50 | SELECT * FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT $2
##### snipped #####
Replace ORDER BY total_exec_time with ORDER BY mean_exec_time to rank by average time, and run as postgres (or a role with pg_read_all_stats) to see full statement text.
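To see how dominant each statement is relative to everything captured, a window function can express each statement's share of total execution time. A sketch against the same view:

```sql
-- Rank statements by their percentage of total execution time
SELECT queryid,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct,
       left(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

A statement with a modest mean time but a very high pct is usually a better optimization target than a rare query with a large mean.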
- Validate one top statement using EXPLAIN ANALYZE and track what changes the plan.
$ sudo -u postgres psql -d appdb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM events WHERE created_at > now() - interval '7 days' ORDER BY created_at DESC LIMIT 50;"
                                                                 QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.29..4.45 rows=50 width=64) (actual time=0.018..0.057 rows=50 loops=1)
   Buffers: shared hit=51
   ->  Index Scan using events_created_at_idx on events  (cost=0.29..395.27 rows=4748 width=64) (actual time=0.018..0.053 rows=50 loops=1)
         Index Cond: (created_at > (now() - '7 days'::interval))
         Buffers: shared hit=51
 Planning:
   Buffers: shared hit=101
 Planning Time: 0.242 ms
 Execution Time: 0.078 ms
(9 rows)
EXPLAIN ANALYZE executes the statement; avoid running write queries on production data without an explicit rollback plan.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
