The MySQL and MariaDB slow query log pinpoints real queries that waste time and I/O under live data, contention, and traffic, making it the most direct input for practical tuning work (indexes, query rewrites, and workload shaping).
When enabled, the server writes an entry for statements that exceed long_query_time (and optionally for queries not using indexes), capturing timing and workload context such as Query_time, Lock_time, Rows_sent, Rows_examined, and the SQL text. The log can be written to a file (common on Linux) or to the mysql.slow_log table depending on log_output.
Slow logs can contain sensitive literals (emails, tokens, IDs) and can grow quickly on busy systems, so analysis should be done on a copied file and for a bounded time window when possible. Thresholds and logging options materially affect what appears in the log, so confirming the active settings first avoids chasing “missing” queries that never met the logging criteria.
Steps to analyze a MySQL or MariaDB slow query log:
- Print the server slow-log configuration to confirm logging mode, file location, and thresholds.
$ mysql -NBe "SHOW VARIABLES WHERE Variable_name IN ('slow_query_log','log_output','slow_query_log_file','long_query_time','min_examined_row_limit','log_queries_not_using_indexes');"
slow_query_log	ON
log_output	FILE
slow_query_log_file	/var/log/mysql/mysql-slow.log
long_query_time	1.000000
min_examined_row_limit	0
log_queries_not_using_indexes	OFF
If log_output includes TABLE, entries are stored in mysql.slow_log instead of a file, so log-parsing tools require exporting or switching to FILE.
- Store the reported slow log path in a shell variable.
$ SLOW_LOG='/var/log/mysql/mysql-slow.log'
Quoting the path prevents surprises when the filename contains spaces.
- Check the slow log size and ownership.
$ sudo ls -lh "$SLOW_LOG"
-rw-r----- 1 mysql adm 58M Dec 12 21:10 /var/log/mysql/mysql-slow.log
- Preview the newest slow log entry to confirm the format and timestamps.
$ sudo tail -n 20 "$SLOW_LOG"
# Time: 2025-12-12T21:09:54.123456Z
# User@Host: app[app] @ 10.0.0.15 []  Id: 381729
# Query_time: 2.347891  Lock_time: 0.000102  Rows_sent: 50  Rows_examined: 98231
SET timestamp=1765573794;
SELECT * FROM orders WHERE customer_id=123 AND status='paid' ORDER BY created_at DESC LIMIT 50;
Entries pair a # Time: header with a SET timestamp= line; the Z suffix on the timestamp indicates UTC.
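The entry format shown above can also be parsed directly when mysqldumpslow is not installed. A minimal sketch, using awk on a stand-in sample file (the path and entries below are illustrative, not taken from a real server):

```shell
# Build a small stand-in slow log matching the "# Query_time:" layout above.
cat > /tmp/sample-slow.log <<'EOF'
# Time: 2025-12-12T21:09:54.123456Z
# Query_time: 2.347891  Lock_time: 0.000102  Rows_sent: 50  Rows_examined: 98231
SELECT * FROM orders WHERE customer_id=123 LIMIT 50;
# Time: 2025-12-12T21:09:58.000000Z
# Query_time: 0.150000  Lock_time: 0.000010  Rows_sent: 1  Rows_examined: 3
SELECT id FROM users WHERE email='a@b.c' LIMIT 1;
EOF

# Scan the Query_time headers and report the single worst time.
awk '/^# Query_time:/ { if ($3 > max) max = $3 }
     END { printf "worst: %.3fs\n", max }' /tmp/sample-slow.log
# prints: worst: 2.348s
```

The same pattern extends to summing Query_time per window or counting entries above a threshold.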
- Copy the slow log into a working file for analysis.
$ sudo cp --preserve=mode,timestamps "$SLOW_LOG" "$HOME/mysql-slow.log"
Copying avoids churn from log rotation while reading and keeps repeated parsing off the hot file.
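The bounded time window mentioned earlier can be carved out of the copy by comparing the # Time: headers as strings, since they are ISO-8601 and sort lexically. A minimal sketch on illustrative sample data; the window bounds are assumptions:

```shell
# Stand-in log with one entry outside and one inside the window.
cat > /tmp/window-demo.log <<'EOF'
# Time: 2025-12-12T20:30:00.000000Z
# Query_time: 0.500000  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
SELECT 1;
# Time: 2025-12-12T21:05:00.000000Z
# Query_time: 2.300000  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
SELECT 2;
EOF

# Toggle a keep flag at each "# Time:" header; print lines while it is set.
awk -v from='2025-12-12T21:00:00' -v to='2025-12-12T21:10:00' '
  /^# Time:/ { ts = $3; keep = (ts >= from && ts <= to) }
  keep' /tmp/window-demo.log
```

Only the 21:05 entry is printed; redirecting the output to a file gives a smaller log restricted to the incident window.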
- Change ownership of the working file to avoid repeated sudo reads.
$ sudo chown "$USER":"$USER" "$HOME/mysql-slow.log"
- Rank query fingerprints by average execution time using mysqldumpslow.
$ mysqldumpslow -s at -t 10 "$HOME/mysql-slow.log"
Reading mysql slow query log from /home/user/mysql-slow.log
Count: 5  Time=9.12s (45s)  Lock=0.02s (0.10s)  Rows=1.0 (5), db=app
  SELECT token FROM sessions WHERE token='S' LIMIT N;

Count: 42  Time=1.85s (77s)  Lock=0.00s (0s)  Rows=50.0 (2100), db=app
  SELECT * FROM orders WHERE customer_id=N AND status='S' ORDER BY created_at DESC LIMIT N;

Count: 18  Time=1.10s (19s)  Lock=0.01s (0.18s)  Rows=0.0 (0), db=app
  UPDATE inventory SET reserved=reserved+N WHERE sku='S';
mysqldumpslow abstracts literals by default (N for numbers, S for strings) to group similar statements.
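That grouping can be approximated with sed to see how literal abstraction collapses similar statements into one fingerprint. This is an illustrative sketch of the idea, not mysqldumpslow's actual implementation (note that \b is a GNU sed extension):

```shell
# Two statements that differ only in literal values...
printf '%s\n' \
  "SELECT * FROM orders WHERE customer_id=123 AND status='paid' LIMIT 50" \
  "SELECT * FROM orders WHERE customer_id=456 AND status='open' LIMIT 50" |
  # ...collapse to one fingerprint after replacing strings with 'S'
  # and standalone numbers with N, mirroring mysqldumpslow's notation.
  sed -e "s/'[^']*'/'S'/g" -e 's/\b[0-9][0-9]*\b/N/g' |
  sort | uniq -c
# prints a single group with a count of 2
```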
- Rank query fingerprints by frequency to catch fast-but-constant offenders.
$ mysqldumpslow -s c -t 10 "$HOME/mysql-slow.log"
Reading mysql slow query log from /home/user/mysql-slow.log
Count: 154  Time=0.08s (12s)  Lock=0.00s (0s)  Rows=1.0 (154), db=app
  SELECT id FROM users WHERE email='S' LIMIT N;

Count: 98  Time=0.15s (15s)  Lock=0.00s (0s)  Rows=20.0 (1960), db=app
  SELECT * FROM audit_log WHERE created_at >= 'S' ORDER BY created_at DESC LIMIT N;
High-count queries with “small” per-call time frequently dominate total load when multiplied by traffic.
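The arithmetic behind that claim is just per-call average time multiplied by call count, using figures from the reports above:

```shell
# Total load = average time per call x number of calls.
awk 'BEGIN {
  printf "1.85s x 42 calls  = %.1fs total\n", 1.85 * 42
  printf "0.08s x 154 calls = %.1fs total\n", 0.08 * 154
}'
# prints 77.7s and 12.3s: the "fast" 0.08s query still burns 12s of server time
```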
- Filter the summary to a specific table, verb, or pattern using -g.
$ mysqldumpslow -s at -t 5 -g 'FROM orders' "$HOME/mysql-slow.log"
Reading mysql slow query log from /home/user/mysql-slow.log
Count: 42  Time=1.85s (77s)  Lock=0.00s (0s)  Rows=50.0 (2100), db=app
  SELECT * FROM orders WHERE customer_id=N AND status='S' ORDER BY created_at DESC LIMIT N;

Count: 9  Time=0.92s (8s)  Lock=0.00s (0s)  Rows=1.0 (9), db=app
  SELECT COUNT(*) FROM orders WHERE status='S' AND created_at >= 'S';
- Re-run the filtered report with -a when a copy/paste query is needed.
$ mysqldumpslow -a -s at -t 1 -g 'FROM orders' "$HOME/mysql-slow.log"
Reading mysql slow query log from /home/user/mysql-slow.log
Count: 42  Time=1.85s (77s)  Lock=0.00s (0s)  Rows=50.0 (2100), db=app
  SELECT * FROM orders WHERE customer_id=123 AND status='paid' ORDER BY created_at DESC LIMIT 50;
-a can expose sensitive literals from production (emails, tokens, IDs) in terminal scrollback, logs, and tickets.
- Run EXPLAIN on the chosen query to see access type, index use, and row estimates.
$ mysql --database=app -e "EXPLAIN SELECT * FROM orders WHERE customer_id=123 AND status='paid' ORDER BY created_at DESC LIMIT 50\G"
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: orders
         type: ref
possible_keys: idx_orders_customer_status_created
          key: idx_orders_customer_status_created
      key_len: 8
          ref: const,const
         rows: 120
        Extra: Using where; Using index
When supported, EXPLAIN ANALYZE measures actual execution timing and can confirm whether changes improved the slow-log signature.
- Generate a detailed digest report with pt-query-digest when available.
$ pt-query-digest --limit=20 "$HOME/mysql-slow.log" > "$HOME/mysql-slow-report.txt"
pt-query-digest is part of Percona Toolkit and provides a richer breakdown than mysqldumpslow (percentiles, totals, and per-query metrics).
- Review the digest report header to confirm time range and top query IDs.
$ head -n 80 "$HOME/mysql-slow-report.txt"
# 140ms user time, 20ms system time, 26.00M rss, 90.00M vsz
# Current date: Fri Dec 12 21:12:30 2025
# Hostname: db01
# Files: /home/user/mysql-slow.log
# Overall: 326 total, 0.09 QPS, 0.16x concurrency ____________
# Time range: 2025-12-12T20:40:01 to 2025-12-12T21:10:05
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time           141s     2ms      9s   433ms      2s   920ms   120ms
##### snipped #####
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
