The MySQL and MariaDB slow query log pinpoints real queries that waste time and I/O under live data, contention, and traffic, making it the most direct input for practical tuning work (indexes, query rewrites, and workload shaping).
When enabled, the server writes an entry for statements that exceed long_query_time (and optionally for queries not using indexes), capturing timing and workload context such as Query_time, Lock_time, Rows_sent, Rows_examined, and the SQL text. The log can be written to a file (common on Linux) or to the mysql.slow_log table depending on log_output.
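If logging is off, it can usually be enabled at runtime without a restart; the 1-second threshold below is only an example, and runtime changes do not survive a restart unless they are also placed in the server configuration.
$ mysql -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"
Note that long_query_time set with SET GLOBAL only applies to sessions opened after the change.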
Slow logs can contain sensitive literals (emails, tokens, IDs) and can grow quickly on busy systems, so analysis should be done on a copied file and for a bounded time window when possible. Thresholds and logging options materially affect what appears in the log, so confirming the active settings first avoids chasing “missing” queries that never met the logging criteria.
Steps to analyze a MySQL or MariaDB slow query log:
- Print the server slow-log configuration to confirm logging mode, file location, and thresholds.
$ mysql -NBe "SHOW VARIABLES WHERE Variable_name IN ('slow_query_log','log_output','slow_query_log_file','long_query_time','min_examined_row_limit','log_queries_not_using_indexes');"
log_output	FILE
log_queries_not_using_indexes	OFF
long_query_time	1.000000
min_examined_row_limit	0
slow_query_log	ON
slow_query_log_file	/var/log/mysql/mysql-slow.log
If log_output includes TABLE, entries are stored in mysql.slow_log instead of a file, so log-parsing tools require exporting or switching to FILE.
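When TABLE logging is in effect, recent entries can also be inspected directly with SQL instead of a parser (column names as in the stock mysql.slow_log definition; reading it requires sufficient privileges on the mysql schema).
$ mysql -e "SELECT start_time, query_time, rows_examined, sql_text FROM mysql.slow_log ORDER BY start_time DESC LIMIT 5;"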
- Store the reported slow log path in a shell variable.
$ SLOW_LOG='/var/log/mysql/mysql-slow.log'
Quoting the path prevents surprises when the filename contains spaces.
- Check the slow log size and ownership.
$ sudo ls -lh "$SLOW_LOG"
-rw-r----- 1 mysql mysql 1.9K Dec 25 00:37 /var/log/mysql/mysql-slow.log
- Preview the newest slow log entry to confirm the format and timestamps.
$ sudo tail -n 20 "$SLOW_LOG"
# User@Host: root[root] @ localhost []  Id:    11
# Query_time: 1.501485  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
SET timestamp=1766622971;
SELECT SLEEP(1.5) AS slept;
# Time: 2025-12-25T00:36:14.968562Z
# User@Host: root[root] @ localhost []  Id:    11
# Query_time: 1.501302  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
SET timestamp=1766622973;
SELECT SLEEP(1.5) AS slept;
# Time: 2025-12-25T00:36:16.175083Z
# User@Host: root[root] @ localhost []  Id:    11
# Query_time: 1.203651  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
SET timestamp=1766622974;
SELECT SLEEP(1.2) AS slept;
# Time: 2025-12-25T00:37:10.121392Z
# User@Host: root[root] @ localhost []  Id:    13
# Query_time: 10.842147  Lock_time: 0.000011  Rows_sent: 9  Rows_examined: 9
use appdb;
SET timestamp=1766623019;
SELECT SLEEP(1.2) AS slept, id, total FROM orders ORDER BY id;
Each entry carries a SET timestamp= line in Unix epoch seconds, and most formats also write a # Time: header line; the Z suffix indicates UTC.
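To line an entry up with application or system logs, the epoch value from SET timestamp= can be converted to UTC; with GNU date, for example, using the first timestamp from the sample above:
$ date -u -d @1766622971
Thu Dec 25 00:36:11 UTC 2025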
- Copy the slow log into a working file for analysis.
$ sudo cp --preserve=mode,timestamps "$SLOW_LOG" "$HOME/mysql-slow.log"
Copying avoids churn from log rotation while reading and keeps repeated parsing off the hot file.
- Change ownership of the working file to avoid repeated sudo reads.
$ sudo chown "$USER":"$USER" "$HOME/mysql-slow.log"
- Rank query fingerprints by average execution time using mysqldumpslow.
$ mysqldumpslow -s at -t 10 "$HOME/mysql-slow.log"

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  Time: N-N-25T00:N:N.962310Z
  # User@Host: root[root] @ localhost []  Id: N
  # Query_time: N.N  Lock_time: N.N  Rows_sent: N  Rows_examined: N
  SET timestamp=N;
  SELECT SLEEP(N) AS slept

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  # Time: N-N-25T00:N:N.713019Z
  # User@Host: root[root] @ localhost []  Id: N
  # Query_time: N.N  Lock_time: N.N  Rows_sent: N  Rows_examined: N
  SET timestamp=N;
  SELECT SLEEP(N) AS slept

##### snipped #####
mysqldumpslow abstracts literals by default (N for numbers, S for strings) to group similar statements.
- Rank query fingerprints by frequency to catch fast-but-constant offenders.
$ mysqldumpslow -s c -t 10 "$HOME/mysql-slow.log"

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  Time: N-N-25T00:N:N.968562Z
  # User@Host: root[root] @ localhost []  Id: N
  # Query_time: N.N  Lock_time: N.N  Rows_sent: N  Rows_examined: N
  SET timestamp=N;
  SELECT SLEEP(N.N) AS slept

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  # Time: N-N-25T00:N:N.959177Z
  # User@Host: root[root] @ localhost []  Id: N
  # Query_time: N.N  Lock_time: N.N  Rows_sent: N  Rows_examined: N
  SET timestamp=N;
  SELECT SLEEP(N) AS slept

##### snipped #####
High-count queries with “small” per-call time frequently dominate total load when multiplied by traffic.
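Sorting by total time combines both views, since it ranks each fingerprint by per-call cost multiplied by call count; mysqldumpslow supports this with -s t:
$ mysqldumpslow -s t -t 10 "$HOME/mysql-slow.log"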
- Filter the summary to a specific table, verb, or pattern using -g.
$ mysqldumpslow -s at -t 5 -g 'FROM orders' "$HOME/mysql-slow.log"

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  Time: N-N-25T00:N:N.121392Z
  # User@Host: root[root] @ localhost []  Id: N
  # Query_time: N.N  Lock_time: N.N  Rows_sent: N  Rows_examined: N
  use appdb;
  SET timestamp=N;
  SELECT SLEEP(N.N) AS slept, id, total FROM orders ORDER BY id
- Re-run the filtered report with -a when a copy/paste query is needed.
$ mysqldumpslow -a -s at -t 1 -g 'FROM orders' "$HOME/mysql-slow.log"

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), 0users@0hosts
  Time: 2025-12-25T00:37:10.121392Z
  # User@Host: root[root] @ localhost []  Id: 13
  # Query_time: 10.842147  Lock_time: 0.000011  Rows_sent: 9  Rows_examined: 9
  use appdb;
  SET timestamp=1766623019;
  SELECT SLEEP(1.2) AS slept, id, total FROM orders ORDER BY id
-a can expose sensitive literals from production (emails, tokens, IDs) in terminal scrollback, logs, and tickets.
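When literal values are genuinely needed, one option is to write the -a report to a file with restrictive permissions rather than leaving it in scrollback; the subshell and file name here are only an illustrative pattern:
$ (umask 077; mysqldumpslow -a -s at -t 5 "$HOME/mysql-slow.log" > "$HOME/mysql-slow-literals.txt")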
- Run EXPLAIN on the chosen query to see access type, index use, and row estimates.
$ mysql --database=appdb -e "EXPLAIN SELECT * FROM orders WHERE customer_id=1 AND status='processing' ORDER BY created_at DESC LIMIT 5\G"
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: orders
   partitions: NULL
         type: ref
possible_keys: idx_orders_customer_id
          key: idx_orders_customer_id
      key_len: 4
          ref: const
         rows: 3
     filtered: 11.11
        Extra: Using where; Using filesort
When supported, EXPLAIN ANALYZE measures actual execution timing and can confirm whether changes improved the slow-log signature.
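On MySQL 8.0.18+ the same statement can be profiled with EXPLAIN ANALYZE (MariaDB offers the similar ANALYZE statement); note that it actually executes the query. The index below is only a sketch for the filesort seen above, and its name and column order are assumptions to validate against the real schema and write load before applying.
$ mysql --database=appdb -e "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id=1 AND status='processing' ORDER BY created_at DESC LIMIT 5\G"
$ mysql --database=appdb -e "ALTER TABLE orders ADD INDEX idx_orders_cust_status_created (customer_id, status, created_at);"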
- Generate a detailed digest report with pt-query-digest when available.
$ pt-query-digest --limit=20 "$HOME/mysql-slow.log" > "$HOME/mysql-slow-report.txt"
pt-query-digest is part of Percona Toolkit and provides a richer breakdown than mysqldumpslow (percentiles, totals, and per-query metrics).
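pt-query-digest also accepts --since and --until to bound the analysis to a time window, which pairs well with the earlier advice to analyze a limited slice; the timestamps and output file name below are only an example based on the sample data:
$ pt-query-digest --since '2025-12-25 00:30:00' --until '2025-12-25 00:40:00' "$HOME/mysql-slow.log" > "$HOME/mysql-slow-window.txt"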
- Review the digest report header to confirm time range and top query IDs.
$ head -n 80 "$HOME/mysql-slow-report.txt"
# 90ms user time, 10ms system time, 25.00k rss, 0 vsz
# Current date: Thu Dec 25 00:42:50 2025
# Hostname: a430d57f45c8
# Files: /work/outputs/mysql-slow.log
# Overall: 7 total, 2 unique, 0.02 QPS, 0.06x concurrency ________________
# Time range: 2025-12-25T00:31:36 to 2025-12-25T00:37:10
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            21s      1s     11s      3s     11s      3s      2s
# Lock time           11us       0    11us     1us    10us     3us       0
# Rows sent             15       1       9    2.14    8.91    2.77    0.99
# Rows examine          15       1       9    2.14    8.91    2.77    0.99
# Query size           211      24      61   30.14   59.77   12.44   24.84

# Profile
# Rank Query ID                            Response time Calls R/Call  V/M
# ==== =================================== ============= ===== ======= ===
#    1 0xF5F6B59EEA0E3D90B07B184D8DABA0A9  10.8421 51.5%     1 10.8421  0.00 SELECT orders
#    2 0xD5A3FF61AC9DBB71390FE2FF795B97B2  10.2135 48.5%     6  1.7023  0.06 SELECT
##### snipped #####
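To jump from the profile to a query's detailed section, search the report for its Query ID, for example the top entry above; the detailed section repeats the per-query statistics and typically ends with a sample statement ready for EXPLAIN.
$ grep -n -A 25 '0xF5F6B59EEA0E3D90B07B184D8DABA0A9' "$HOME/mysql-slow-report.txt"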
