Improving Apache performance reduces queueing time during traffic bursts and keeps response latency steadier when one host is serving static files, reverse-proxied requests, or many short application responses. The biggest wins usually come from removing repeated work and keeping worker capacity available, not from chasing one magic directive.
Apache throughput is shaped by the active MPM, how long connections stay open, whether responses can be served from cache or compression filters, and how much work is pushed into PHP or upstream applications. Current Apache HTTP Server 2.4 guidance still centers on the event MPM for idle-connection efficiency, mod_http2 for browser-facing TLS sites, mod_cache for cacheable responses, and shared-memory TLS session caching on busy HTTPS hosts.
Examples below use the Debian and Ubuntu packaging layout with apache2ctl, a2enmod, /etc/apache2, and the apache2 service name. Keep server-status restricted to local or trusted admin access, leave ExtendedStatus Off during normal operation, and make one change at a time so benchmarks, status output, and logs show which adjustment actually helped.
Steps to improve Apache performance:
- Capture a repeatable baseline before changing anything.
$ ab -n 1000 -c 50 -k http://host.example.net/
This is ApacheBench, Version 2.3 <$Revision: 1903618 $>
##### snipped #####
Concurrency Level:      50
Time taken for tests:   0.842 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    1000
Requests per second:    1187.59 [#/sec] (mean)
Time per request:       42.103 [ms] (mean)
##### snipped #####
Keep the URL, Host header, cookies, and concurrency identical for every comparison. Benchmark a representative endpoint, not an admin page or a CDN-cached asset that hides Apache work.
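To make before/after comparisons less error-prone, the headline numbers can be pulled out of a saved run with a short awk filter. This is a sketch: baseline.txt is an assumed filename, created here as a stand-in for real ab output.

```shell
# Stand-in for a saved run; in practice capture it with:
#   ab -n 1000 -c 50 -k http://host.example.net/ > baseline.txt
printf '%s\n' \
    'Requests per second:    1187.59 [#/sec] (mean)' \
    'Time per request:       42.103 [ms] (mean)' \
    'Time per request:       0.842 [ms] (mean, across all concurrent requests)' \
    > baseline.txt

# Print only the two headline metrics; the trailing "(mean)" anchor skips
# the per-concurrent-request variant of "Time per request".
awk -F': +' '/^Requests per second:|^Time per request:.*\(mean\)$/ \
    {print $1 " = " $2}' baseline.txt
```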
- Check which multi-processing module (MPM) is active.
$ sudo apache2ctl -V | grep -E '^Server MPM:'
Server MPM:     event
Apache's current event MPM releases idle keepalive sockets back to the listener so worker threads can serve other requests. If the result is prefork, the highest-impact improvement is often moving PHP workloads to PHP-FPM and switching to event or worker.
Related: How to enable the event MPM in Apache
Related: How to configure PHP-FPM with Apache

- Use a local-only server-status endpoint to watch worker pressure and keep ExtendedStatus off unless timing detail is specifically needed.
$ curl -sS http://127.0.0.1/server-status?auto | grep -E '^(ServerMPM|BusyWorkers|IdleWorkers):'
ServerMPM: event
BusyWorkers: 1
IdleWorkers: 49
Never leave server-status open to the public internet. Restrict it to Require local or a tightly controlled admin source list.
Apache's performance guide notes that ExtendedStatus On adds extra per-request timing calls. Leave it off for routine monitoring and enable it only during deeper troubleshooting.
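A minimal locked-down status stanza looks like this, assuming mod_status is enabled (a2enmod status) and the Debian layout described above:

```apache
<Location /server-status>
    SetHandler server-status
    # Only requests originating on the host itself can reach this page.
    Require local
</Location>
# ExtendedStatus Off is the default; turn it on only while troubleshooting.
```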
- Review keepalive and timeout settings before lowering them in small steps.
$ grep -R -h -E '^(KeepAlive|MaxKeepAliveRequests|KeepAliveTimeout|Timeout)\b' /etc/apache2/apache2.conf /etc/apache2/conf-enabled /etc/apache2/sites-enabled | sed '/^$/d'
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
The current Apache performance guide still treats the default KeepAliveTimeout 5 as a tradeoff between network efficiency and worker usage. Lower it only when idle connections are tying up capacity, and prefer the event MPM over disabling keepalive blindly.
Related: How to enable KeepAlive in Apache
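If measurements show idle keepalive connections holding workers, a staged reduction might look like this in /etc/apache2/apache2.conf. The values are illustrative, not recommendations; re-benchmark after each step.

```apache
KeepAlive On
MaxKeepAliveRequests 100
# Stepped down from the default 5; try 3 before going lower.
KeepAliveTimeout 3
```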
- Check the active MPM limits and size MaxRequestWorkers against real memory use plus observed queueing.
$ sudo grep -E '^\s*(ServerLimit|ThreadsPerChild|MaxRequestWorkers|MaxConnectionsPerChild)' /etc/apache2/mods-available/mpm_event.conf
	ThreadsPerChild		25
	MaxRequestWorkers	150
	MaxConnectionsPerChild	0

The \s* in the pattern matters because Debian indents these directives inside an IfModule block, so an anchored match on the directive name alone finds nothing.
On event and worker, MaxRequestWorkers is bounded by compatible ServerLimit and ThreadsPerChild values. Raising it too far can trigger swapping and make latency worse instead of better.
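One way to sanity-check a MaxRequestWorkers target is to divide the memory budget for Apache by a measured per-child figure and derive compatible sizing directives. The inputs below are placeholders; measure real per-child RSS with ps or smem before trusting the result.

```shell
# Placeholder inputs: 2048 MB budgeted for Apache, ~60 MB RSS per event
# child, 25 threads per child (the Debian default).
awk -v budget_mb=2048 -v child_mb=60 -v threads=25 'BEGIN {
    children = int(budget_mb / child_mb)        # whole children that fit
    printf "ServerLimit         %d\n", children
    printf "ThreadsPerChild     %d\n", threads
    printf "MaxRequestWorkers   %d\n", children * threads
}'
```

With these placeholder numbers, 34 children fit the budget, giving MaxRequestWorkers 850; a result far above or below the current setting is a prompt to re-measure, not an instruction to apply blindly.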
- Cache static responses aggressively, but keep personalized or protected content out of the fast cache path.
$ curl -sSI https://host.example.net/static/app.css | grep -i -E '^(cache-control|expires|etag):'
ETag: "664f4d-19b02"
Cache-Control: public, max-age=604800
Apache's caching guide explains that mod_cache can serve hits from the quick handler before most request processing runs. That is a strong performance win for safe cacheable content, but it also means authenticated or per-user responses need deliberate exclusions.
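A sketch of a disk cache that stays out of the authenticated path, assuming mod_cache and mod_cache_disk are enabled (a2enmod cache cache_disk) and that /app/ is where per-user responses live; both paths are illustrative:

```apache
# Serve cache hits from the quick handler, before most request processing.
CacheQuickHandler on
CacheEnable disk /static/
# Keep authenticated and per-user responses out of the cache path.
CacheDisable /app/
```

Debian's cache_disk.conf already sets CacheRoot, so only the enable and disable rules need to be declared per site.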
- Compress only text-based responses and verify that Vary: Accept-Encoding is present.
$ curl -sS -H 'Accept-Encoding: br,gzip' -I https://host.example.net/ | grep -i -E '^(content-encoding|vary):'
Vary: Accept-Encoding
Content-Encoding: br
mod_brotli and mod_deflate are most effective on HTML, CSS, JS, JSON, and similar text payloads. Exclude already-compressed binaries such as JPEG, PNG, WebP, .gz, and video files so CPU is not wasted recompressing them.
Be careful compressing pages that reflect secrets or tokens over TLS. The current mod_brotli documentation still calls out the BREACH class of information disclosure attacks.
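One way to keep compression on text only is to opt in by MIME type rather than exclude binaries by extension, since anything not listed is simply never filtered. This sketch assumes mod_deflate is enabled; the same AddOutputFilterByType pattern works for the BROTLI_COMPRESS filter from mod_brotli.

```apache
<IfModule mod_deflate.c>
    # Text payloads only; JPEG, PNG, WebP, .gz, and video are never matched.
    AddOutputFilterByType DEFLATE text/html text/plain text/css
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```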
- Enable HTTP/2 on the TLS virtual host when the site serves many parallel assets to browsers.
<VirtualHost *:443>
    ServerName host.example.net
    Protocols h2 http/1.1
    ...
</VirtualHost>
Apache supports HTTP/2 in every shipped MPM, but the current upstream guide warns that prefork effectively processes one request at a time per connection. event is usually the better fit when the platform supports it.
If HTTP/3 is part of the plan, terminate QUIC on a CDN or frontend proxy and keep Apache as the origin behind that layer.
Related: How to enable HTTP/2 in Apache
Related: How to enable HTTP/3 in Apache

- Confirm that clients actually negotiate HTTP/2 after the change.
$ curl -sS --http2 -I https://host.example.net/ | sed -n '1,3p'
HTTP/2 200
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
Use -k only against a staging or self-signed certificate. If the response still shows HTTP/1.1, verify that the request is reaching the updated 443 virtual host and that the ssl and http2 modules are both loaded.
Related: How to enable HTTP/2 in Apache
- Enable shared-memory TLS session caching on busy HTTPS sites so reconnecting clients avoid a full handshake every time.
<IfModule ssl_module>
    SSLSessionCache shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)
    SSLSessionCacheTimeout 300
</IfModule>

Apache's current mod_ssl documentation still recommends the shared-memory shmcb cache for high-performance inter-process session reuse, and the default SSLSessionCacheTimeout remains 300 seconds.
- Move PHP sites to PHP-FPM when application code is the bottleneck or when mod_php is keeping the server on prefork.
PHP-FPM keeps PHP execution out of the Apache process model so the web tier can use the event MPM and keep idle connections from pinning one process per client.
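The handoff itself is a short proxy_fcgi stanza, which assumes the proxy and proxy_fcgi modules are enabled (a2enmod proxy proxy_fcgi). The socket path below matches Debian's PHP-FPM packaging but varies by PHP version, so verify it against the installed pool configuration:

```apache
<FilesMatch "\.php$">
    # Socket path is version-specific; check /etc/php/*/fpm/pool.d/www.conf.
    SetHandler "proxy:unix:/run/php/php8.3-fpm.sock|fcgi://localhost"
</FilesMatch>
```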
Related: How to configure PHP-FPM with Apache
Related: How to enable the event MPM in Apache

- Remove request-time work that is not required.
$ sudo apache2ctl -M | head
Loaded Modules:
 core_module (static)
 so_module (static)
 watchdog_module (static)
 http_module (static)
 log_config_module (static)
 unixd_module (static)
 version_module (static)
Keep HostnameLookups Off, prefer AllowOverride None where .htaccess overrides are not needed, and disable modules that do not serve the deployed workload. Each of those removes extra parsing or per-request work from the hot path.
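In configuration, those two defaults look like this; the DocumentRoot path is illustrative:

```apache
# Never do reverse DNS per request just to log a hostname.
HostnameLookups Off
<Directory /var/www/html>
    # Skip the per-request .htaccess lookup walk entirely.
    AllowOverride None
</Directory>
```

Unneeded modules are then removed with a2dismod, with the module list chosen from the workload rather than copied from another host.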
- Validate the full configuration after each edit.
$ sudo apache2ctl -t
Syntax OK
Syntax checks catch broken includes and invalid directives before a reload turns them into an outage.
Related: How to test Apache configuration
- Reload Apache for normal tuning changes, or restart it when MPM sizing directives changed.
$ sudo systemctl reload apache2
$ sudo systemctl restart apache2
MaxRequestWorkers, ThreadsPerChild, and related MPM sizing directives are startup-time settings. Use a full restart after those edits instead of assuming a reload applied them.
- Re-run the same benchmark and compare it with server-status plus the error log.
$ ab -n 1000 -c 50 -k http://host.example.net/
##### snipped #####
Complete requests:      1000
Failed requests:        0
Requests per second:    1438.67 [#/sec] (mean)
Time per request:       34.754 [ms] (mean)
##### snipped #####
A successful tuning change shows lower latency or better throughput without new errors, swap activity, or worker exhaustion. If output improves in one metric but logs show resets, timeouts, or backend errors, roll back the last change and re-test.
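A quick way to express the difference between two runs as a percentage, shown here with the requests-per-second figures from the sample outputs above (any pair of before/after numbers works the same way):

```shell
pct_change() {
    # Percent change from $1 (baseline) to $2 (tuned run).
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.1f%%\n", (b - a) * 100 / a }'
}

pct_change 1187.59 1438.67   # throughput: prints +21.1%
pct_change 42.103 34.754     # mean latency: prints -17.5%
```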
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
