Query performance determines whether MySQL and MariaDB stay responsive under load or spiral into timeouts. Efficient queries reduce CPU burn, shrink lock hold times, and cut disk I/O, keeping latency predictable as tables and concurrency grow.

The SQL optimizer chooses an execution plan by weighing indexes, column statistics, join strategies, and cost estimates. The slow query log supplies real-world candidates for tuning, and EXPLAIN shows how each statement executes—index lookups versus scans, plus indicators such as temporary tables or filesorts.

Most improvements involve trade-offs: extra indexes speed reads while increasing write cost, logging adds overhead and storage churn, and table maintenance can disrupt production workloads. Schema and configuration changes benefit from a baseline, a controlled rollout, and a rollback path for when plans regress or DDL runs longer than expected.

MySQL/MariaDB query optimization checklist:

  1. Verify slow query logging settings for representative capture.
    mysql> SHOW VARIABLES LIKE 'slow_query_log';
    +----------------+-------+
    | Variable_name  | Value |
    +----------------+-------+
    | slow_query_log | ON    |
    +----------------+-------+
    1 row in set (0.00 sec)
    
    mysql> SHOW VARIABLES LIKE 'slow_query_log_file';
    +---------------------+-------------------------------+
    | Variable_name       | Value                         |
    +---------------------+-------------------------------+
    | slow_query_log_file | /var/log/mysql/mysql-slow.log |
    +---------------------+-------------------------------+
    1 row in set (0.00 sec)
    
    mysql> SHOW VARIABLES LIKE 'long_query_time';
    +-----------------+----------+
    | Variable_name   | Value    |
    +-----------------+----------+
    | long_query_time | 1.000000 |
    +-----------------+----------+
    1 row in set (0.00 sec)
    
    mysql> SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
    +-------------------------------+-------+
    | Variable_name                 | Value |
    +-------------------------------+-------+
    | log_queries_not_using_indexes | OFF   |
    +-------------------------------+-------+
    1 row in set (0.00 sec)

    slow_query_log must be ON to collect slow statements.

    long_query_time is in seconds. Lowering it increases log volume.

    log_queries_not_using_indexes is useful for short captures. It can be noisy on OLTP workloads.
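
    These settings can be changed at runtime without a restart. A minimal sketch, with illustrative values; note that SET GLOBAL does not persist across a server restart unless the same settings are also placed in the configuration file:

    ```sql
    -- Enable slow-query capture at runtime (requires SUPER or
    -- SYSTEM_VARIABLES_ADMIN). Values here are illustrative; persist
    -- them in my.cnf to survive a restart.
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;   -- seconds; lower it for a short capture
    SET GLOBAL log_queries_not_using_indexes = 'OFF';
    ```

    Existing sessions keep the old long_query_time; new connections pick up the global value.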

  2. Summarize the slow query log to select tuning targets.
    $ sudo mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log
    Reading mysql slow query log from /var/log/mysql/mysql-slow.log
    
    Count: 23  Time=1.12s (25s)  Lock=0.00s (0s)  Rows=20.0 (460), app[app]@10.0.0.5
      SELECT order_id, total FROM orders WHERE customer_id = N ORDER BY created_at DESC LIMIT N
    
    Count: 9  Time=0.98s (8s)  Lock=0.01s (0s)  Rows=1.0 (9), app[app]@10.0.0.5
      SELECT * FROM customers WHERE email = 'S'
    
    ##### snipped #####

    Sort by total time (t) or call count (c) to find the biggest wins.

    MariaDB packages may provide mariadb-dumpslow as an alias for mysqldumpslow.

    Use the log path from slow_query_log_file. Common locations include /var/log/mysql/mysql-slow.log.

  3. Run EXPLAIN for a top slow statement to identify plan bottlenecks.
    mysql> EXPLAIN SELECT order_id, total
        -> FROM orders
        -> WHERE customer_id = 123
        -> ORDER BY created_at DESC
        -> LIMIT 20\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: orders
             type: ref
    possible_keys: idx_customer_id
              key: idx_customer_id
          key_len: 4
              ref: const
             rows: 2150
            Extra: Using where; Using filesort
    1 row in set (0.00 sec)

    Red flags include type: ALL, Using filesort, Using temporary, or a high rows estimate.

    Predicates that wrap an indexed column in a function often prevent index use.
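
    As an illustration of the function-wrapping pitfall, a predicate that applies DATE() to an indexed datetime column defeats the index, while an equivalent range predicate does not (table and column names follow the examples above; the date values are illustrative):

    ```sql
    -- Not sargable: the function hides created_at from the index.
    SELECT order_id FROM orders WHERE DATE(created_at) = '2024-06-01';

    -- Sargable rewrite: a half-open range on the raw column can use the index.
    SELECT order_id FROM orders
    WHERE created_at >= '2024-06-01' AND created_at < '2024-06-02';
    ```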

  4. Create a composite index that matches the query predicate and sort order.
    mysql> CREATE INDEX idx_customer_created ON orders (customer_id, created_at);
    Query OK, 0 rows affected (2.31 sec)
    Records: 0  Duplicates: 0  Warnings: 0

    Building indexes increases write overhead and disk usage. Long-running DDL can hold metadata locks that stall writes.

    Composite index order typically puts equality-filter columns first, then the ORDER BY columns, then any range-filter columns, because once a range predicate is applied the rest of the index can no longer satisfy the sort.
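
    A possible extension, if the statement is hot enough to justify the extra write cost, is a covering index that also includes the selected column so the lookup never touches the base row. A sketch; idx_customer_created_total is a hypothetical name:

    ```sql
    -- Hypothetical covering index: equality column, then sort column, then
    -- the selected column, so EXPLAIN can report "Using index".
    CREATE INDEX idx_customer_created_total
        ON orders (customer_id, created_at, total);
    ```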

  5. Refresh optimizer statistics after large data churn.
    mysql> ANALYZE TABLE orders;
    +---------------+---------+----------+----------+
    | Table         | Op      | Msg_type | Msg_text |
    +---------------+---------+----------+----------+
    | shop.orders   | analyze | status   | OK       |
    +---------------+---------+----------+----------+
    1 row in set (0.12 sec)

    Run ANALYZE TABLE after bulk loads or large deletes to refresh statistics.
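
    On MySQL 8.0+, table statistics can be supplemented with column histograms, which help the optimizer with skewed value distributions. This exact syntax is MySQL-specific; MariaDB uses a different histogram mechanism. The column and bucket count here are illustrative:

    ```sql
    -- MySQL 8.0+ only: build a 32-bucket histogram on a skewed column.
    ANALYZE TABLE orders UPDATE HISTOGRAM ON total WITH 32 BUCKETS;
    ```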

  6. Re-run EXPLAIN to confirm index usage for the tuned statement.
    mysql> EXPLAIN SELECT order_id, total
        -> FROM orders
        -> WHERE customer_id = 123
        -> ORDER BY created_at DESC
        -> LIMIT 20\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: orders
             type: ref
    possible_keys: idx_customer_created,idx_customer_id
              key: idx_customer_created
          key_len: 4
              ref: const
             rows: 20
            Extra: Using where
    1 row in set (0.00 sec)

    Expect key to match the intended index and the rows estimate to drop. Plan regressions often show up as scans or filesorts.
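
    Because the new composite index starts with customer_id, the old single-column index may now be redundant. Once the plan is confirmed stable, it can be dropped to recover write overhead; verify first that no other statement depends on it:

    ```sql
    -- idx_customer_id is a left prefix of idx_customer_created, so it is
    -- likely redundant; drop only after confirming nothing else uses it.
    ALTER TABLE orders DROP INDEX idx_customer_id;
    ```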

  7. Use OPTIMIZE TABLE only when reclaiming space or addressing fragmentation is required.
    mysql> OPTIMIZE TABLE orders;
    +---------------+----------+----------+----------+
    | Table         | Op       | Msg_type | Msg_text |
    +---------------+----------+----------+----------+
    | shop.orders   | optimize | status   | OK       |
    +---------------+----------+----------+----------+
    1 row in set (12.84 sec)

    OPTIMIZE TABLE can rebuild the table; on InnoDB it maps to a recreate plus analyze. On large tables it can run for hours or block writes, depending on engine and version.
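
    On InnoDB, an equivalent rebuild can be expressed as a null ALTER, which on recent versions runs as online DDL and permits concurrent writes for most of the operation. Behavior varies by version, so test on a copy first:

    ```sql
    -- Rebuilds the table in place; for InnoDB, OPTIMIZE TABLE maps to
    -- the same recreate-plus-analyze operation internally.
    ALTER TABLE orders ENGINE=InnoDB;
    ```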

  8. Validate improvements with a post-change slow-log summary and workload metrics.
    $ sudo mysqldumpslow -s c -t 10 /var/log/mysql/mysql-slow.log
    Reading mysql slow query log from /var/log/mysql/mysql-slow.log
    
    Count: 3  Time=0.21s (1s)  Lock=0.00s (0s)  Rows=20.0 (60), app[app]@10.0.0.5
      SELECT order_id, total FROM orders WHERE customer_id = N ORDER BY created_at DESC LIMIT N
    
    ##### snipped #####
    
    mysql> SHOW GLOBAL STATUS LIKE 'Slow_queries';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | Slow_queries  | 1842  |
    +---------------+-------+
    1 row in set (0.00 sec)

    Compare the same workload window before and after changes. Track latency percentiles, CPU, buffer pool reads, and lock waits.
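
    Where the performance schema is enabled, digest statistics offer a complementary view to the slow log, aggregating every statement rather than only those above long_query_time. A sketch; the timer columns are in picoseconds:

    ```sql
    -- Top statements by total time from performance_schema (if enabled).
    SELECT DIGEST_TEXT,
           COUNT_STAR                       AS calls,
           ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_seconds,
           ROUND(AVG_TIMER_WAIT / 1e9, 2)  AS avg_ms
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY SUM_TIMER_WAIT DESC
    LIMIT 10;
    ```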
