MaxRequestWorkers sets the concurrency ceiling for Apache: the maximum number of requests that can be served at once. Tuning it directly affects throughput, tail latency, and how gracefully traffic spikes are absorbed. A well-chosen limit avoids needless queuing and timeouts at the low end, and swap thrash or out-of-memory crashes at the high end.

The directive is enforced by the active MPM (multi-processing module): prefork serves each request from its own single-threaded child process, while worker and event serve requests from multiple threads per child. The usable ceiling is constrained by related directives such as ServerLimit, ThreadLimit, and ThreadsPerChild, as well as by application behavior that keeps workers busy (slow backends, long-running requests, large uploads).

Changes should be driven by measurement. A MaxRequestWorkers setting that is too high can push the host into swapping or trigger the OOM killer, while one that is too low can cause 503 responses and “server reached MaxRequestWorkers” warnings in the error log. MPM sizing directives are applied at startup, so plan for a full service restart when changing worker limits.
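
One useful input before touching the limit is the concurrency the server actually sees at peak. A quick approximation counts established connections on the listening ports (this sketch assumes Apache listens on ports 80 and 443; adjust to match your vhosts):

    $ ss -Htn state established '( sport = :80 or sport = :443 )' | wc -l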

Steps to tune MaxRequestWorkers in Apache:

  1. Identify the active multi-processing module (MPM).
    $ apache2ctl -V 2>/dev/null | grep -E '^Server MPM:'
    Server MPM:     event
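
    A cross-check that lists the loaded MPM module by name (the output should include a line such as mpm_event_module when the event MPM is active):
    $ apache2ctl -M 2>/dev/null | grep -i mpm
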
  2. Check the configured worker limits in the active MPM configuration file.
    $ sudo grep -E '^\s*(ThreadLimit|ThreadsPerChild|MaxRequestWorkers|MaxConnectionsPerChild)' /etc/apache2/mods-available/mpm_event.conf
    ThreadLimit             64
    ThreadsPerChild         25
    MaxRequestWorkers       150
    MaxConnectionsPerChild  0

    MaxRequestWorkers is capped by ServerLimit * ThreadsPerChild on event and worker, or by ServerLimit on prefork.
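
    For example, with the event MPM's default ServerLimit of 16 and the ThreadsPerChild of 25 shown above, the effective ceiling is 16 * 25 = 400 threads, so MaxRequestWorkers 150 fits without declaring ServerLimit; raising it past 400 would require raising ServerLimit as well.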

  3. Check the error log for worker saturation warnings.
    $ sudo grep -n 'reached MaxRequestWorkers' /var/log/apache2/error.log || echo "No MaxRequestWorkers warnings found"
    No MaxRequestWorkers warnings found
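
    Saturation from earlier spikes may only appear in rotated logs; a wider sweep (the path pattern assumes the default Debian/Ubuntu logrotate naming) is:
    $ sudo zgrep -h 'reached MaxRequestWorkers' /var/log/apache2/error.log* | tail -n 5
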
  4. Inspect typical Apache child process RSS to estimate memory pressure at peak load.
    $ ps -o pid,rss,cmd -C apache2 --sort=-rss | head -n 6
        PID   RSS CMD
       9575 10224 /usr/sbin/apache2 -k start
      10323  7284 /usr/sbin/apache2 -k start
      10322  7280 /usr/sbin/apache2 -k start

    Run during representative traffic to avoid sizing from idle RSS.
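
    A rough memory-based ceiling can then be derived from the measured RSS. The sketch below assumes a 2048 MB memory budget for Apache and the 25 threads per child shown in step 2; both are placeholders to replace with your own headroom and settings.
    $ ps -o rss= -C apache2 | awk -v budget_mb=2048 -v threads=25 '
          { kb += $1; n++ }
          END { rss_mb = kb / n / 1024;
                printf "avg child RSS %.1f MB -> ~%d processes -> worker ceiling ~%d\n",
                       rss_mb, budget_mb / rss_mb, int(budget_mb / rss_mb) * threads }'

    On the event MPM with small children this often yields a ceiling far above what the backend can sustain, which is why the load test in step 11 still matters.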

  5. Locate the MPM configuration file that controls worker limits on Debian or Ubuntu.
    $ ls -la /etc/apache2/mods-available/mpm_*.conf
    -rw-r--r-- 1 root root 613 Mar 18  2024 /etc/apache2/mods-available/mpm_event.conf
    -rw-r--r-- 1 root root 500 Mar 18  2024 /etc/apache2/mods-available/mpm_prefork.conf
    -rw-r--r-- 1 root root 780 Mar 18  2024 /etc/apache2/mods-available/mpm_worker.conf

    RHEL-family layouts commonly use /etc/httpd/conf.modules.d/00-mpm.conf with the httpd service name.
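
    On those systems, the loaded MPM can be confirmed in that file (exact contents vary by release; lines for the inactive MPMs are usually commented out):
    $ grep -E '^LoadModule mpm_' /etc/httpd/conf.modules.d/00-mpm.conf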

  6. Edit the matching /etc/apache2/mods-available/mpm_*.conf file to set MaxRequestWorkers with compatible ThreadLimit and ThreadsPerChild values.
    <IfModule mpm_event_module>
        StartServers            2
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadLimit             64
        ThreadsPerChild         25
        MaxRequestWorkers       200
        MaxConnectionsPerChild  0
    </IfModule>

    For event or worker, ServerLimit * ThreadsPerChild must be at least MaxRequestWorkers, and ThreadsPerChild must not exceed ThreadLimit.
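
    ServerLimit defaults to 16 on event and worker, so with ThreadsPerChild 25 the ceiling is 400 unless ServerLimit is declared. An illustrative fragment for a higher target (the values are examples, not a recommendation):
    <IfModule mpm_event_module>
        ServerLimit             24
        ThreadsPerChild         25
        MaxRequestWorkers       600
    </IfModule>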

  7. Validate the configuration syntax.
    $ sudo apache2ctl configtest
  8. Restart the apache2 service to apply MPM limit changes.
    $ sudo systemctl restart apache2

    MPM sizing directives are read at startup; a reload may not apply worker limit changes.

  9. Confirm the new MaxRequestWorkers value in the active MPM configuration file.
    $ sudo grep -E '^\s*(ThreadLimit|ThreadsPerChild|MaxRequestWorkers|MaxConnectionsPerChild)' /etc/apache2/mods-available/mpm_event.conf
    ThreadLimit             64
    ThreadsPerChild         25
    MaxRequestWorkers       200
    MaxConnectionsPerChild  0
  10. Read BusyWorkers and IdleWorkers from the server-status?auto endpoint.
    $ curl -sS 'http://127.0.0.1/server-status?auto' | grep -E '^(ServerMPM|BusyWorkers|IdleWorkers):'
    ServerMPM: event
    BusyWorkers: 1
    IdleWorkers: 49
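
    A 404 here usually means mod_status is not enabled or /server-status is not allowed for local requests; on Debian or Ubuntu, enabling it typically looks like this (review the access rules in status.conf before exposing the endpoint):
    $ sudo a2enmod status && sudo systemctl reload apache2
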
  11. Run a load test against a representative URL to compare latency and error rate.
    $ ab -n 2000 -c 200 -k http://127.0.0.1/
    ##### snipped #####
    Concurrency Level:      200
    Time taken for tests:   0.068 seconds
    Complete requests:      2000
    Failed requests:        402
       (Connect: 0, Receive: 0, Length: 397, Exceptions: 5)
    Keep-Alive requests:    1603
    ##### snipped #####
    Requests per second:    29381.09 [#/sec] (mean)
    Time per request:       6.807 [ms] (mean)
    Time per request:       0.034 [ms] (mean, across all concurrent requests)
    Transfer rate:          252536.36 [Kbytes/sec] received

    Aggressive concurrency can overload production services; schedule tests for a safe window or use a staging endpoint.
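
    While the test runs, watching worker usage from a second terminal shows how close traffic gets to the new ceiling; a simple loop (the 2-second interval is arbitrary):
    $ watch -n 2 "curl -sS 'http://127.0.0.1/server-status?auto' | grep -E '^(BusyWorkers|IdleWorkers):'"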