Improving Nginx performance reduces queueing during traffic bursts and keeps latency steadier when one host is serving static files, reverse-proxied application traffic, or many short HTTPS requests. The biggest gains usually come from measuring the real bottleneck first, then removing repeated work without exhausting worker capacity.
Nginx uses an event-driven worker model, so throughput is shaped by the worker count, the connections each worker can keep open, how efficiently client and upstream connections are reused, and how much response work can be avoided through compression, caching, and local file metadata reuse. Current upstream documentation also reflects recent proxy defaults: proxy_http_version now defaults to 1.1 and upstream keepalive caching is enabled by default in 1.29.7, so older reverse-proxy snippets should be reviewed before they are copied forward unchanged.
Examples below use the common Linux packaging layout with /etc/nginx/nginx.conf, systemctl, and local-only status checks. Keep stub_status restricted to trusted admin access, note that the HTTP/2 and HTTP/3 modules are not built by default in every Nginx build, and treat HTTP/3 as optional because the current upstream module is still marked experimental. Test with nginx -t before every reload, and make one measured change at a time so the benchmark and status counters still explain the result.
Related: How to benchmark Nginx with wrk
Related: How to enable the Nginx stub_status page
Related: How to test Nginx configuration
$ wrk -t2 -c20 -d5s http://host.example.net/
Running 5s test @ http://host.example.net/
2 threads and 20 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.25ms 9.81ms 130.66ms 93.24%
Req/Sec 1.86k 730.52 3.91k 68.00%
18578 requests in 5.04s, 11.16MB read
Requests/sec: 3686.85
Transfer/sec: 2.22MB
Keep the same URL, Host header, cookies, thread count, concurrency, and duration for every comparison run. Benchmark a representative uncached endpoint, not an admin page or a CDN-served asset that hides Nginx work.
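For example, a comparison run can pin the Host header and a cookie explicitly so every run exercises the same vhost and cache path; the header values and target address below are placeholders, not values from this setup.
$ wrk -t2 -c20 -d5s -H 'Host: host.example.net' -H 'Cookie: session=benchmark' http://203.0.113.10/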
Related: How to benchmark Nginx with wrk
$ curl -s http://127.0.0.1/nginx_status
Active connections: 1
server accepts handled requests
 33 33 18599
Reading: 0 Writing: 1 Waiting: 0
Never leave stub_status open to the public internet. Restrict it to 127.0.0.1, ::1, or a tightly controlled admin source list.
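A minimal location block along these lines keeps the counters local-only; the /nginx_status path matches the curl check above, and the allowed source list is an assumption to adapt.
location = /nginx_status {
    stub_status;
    # Only loopback may read the counters; everything else is denied.
    allow 127.0.0.1;
    allow ::1;
    deny all;
}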
If nginx -t reports that the stub_status directive is unknown, the build was compiled without --with-http_stub_status_module.
$ sudo nginx -T 2>/dev/null | grep -nE '^\s*(worker_processes|worker_connections|keepalive_timeout|keepalive_requests|gzip|ssl_session_cache|open_file_cache|proxy_http_version)\b'
2:worker_processes auto;
12:    worker_connections 1024;
34:    keepalive_timeout 65;
35:    keepalive_requests 1000;
54:    gzip on;
75:    ssl_session_cache shared:SSL:10m;
Use nginx -T when tuning because it validates the config and shows the loaded include tree, which is the fastest way to see whether a directive is already being set in another snippet.
Related: How to test Nginx configuration
user www-data;
worker_processes auto;
pid /run/nginx.pid;
Upstream Nginx still documents worker_processes 1 as the default, while auto detects the available CPU cores and is the better starting point for most dedicated hosts.
If the service runs inside a container or cgroup with a tighter CPU quota than the visible core count, replace auto with an explicit worker count that matches the quota.
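For instance, on a container capped at two CPUs, an explicit count along these lines avoids spawning idle workers; the value is an assumption that should match the actual quota.
# Container is limited to 2 CPUs, so pin the worker count instead of using auto.
worker_processes 2;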
events {
worker_connections 4096;
}
worker_connections is a per-worker limit and includes upstream sockets as well as client sockets. The official Nginx docs also note that the real ceiling cannot exceed the current open-file limit, so higher values usually require a matching service or OS file-descriptor increase.
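One way to keep the two limits consistent is to raise the per-worker file-descriptor ceiling alongside the connection limit; the figures below are illustrative rather than recommended values.
# Give each worker headroom for client plus upstream sockets.
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}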
http {
keepalive_timeout 15s 15s;
keepalive_requests 1000;
##### snipped #####
}
Lowering keepalive_timeout reduces how long idle clients occupy a connection slot, while keepalive_requests limits the amount of work done on one connection before it is recycled to release per-connection memory.
Do not shorten the timeout blindly on browser-facing sites behind a load balancer or CDN. Align it with any frontend idle timeout so clients do not see avoidable resets.
Related: How to enable keepalive in Nginx
$ sudo nginx -T 2>/dev/null | grep -nE '^\s*(proxy_http_version|proxy_set_header Connection|keepalive)\b'
##### snipped #####
The current upstream docs changed two common assumptions in 1.29.7: proxy_http_version now defaults to 1.1, and an upstream keepalive cache equivalent to keepalive 32 is enabled by default. Older configs often still carry explicit proxy_http_version 1.1; and proxy_set_header Connection ""; lines for HTTP upstream keepalive, which remain valid on older builds or as deliberate overrides.
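On a build that predates those defaults, the long-standing explicit pattern looks roughly like this; the upstream name and backend address are placeholders.
upstream app_backend {
    server 127.0.0.1:8080;
    # Keep up to 32 idle connections to this upstream in each worker's cache.
    keepalive 32;
}

server {
    location / {
        proxy_pass http://app_backend;
        # Required for HTTP upstream keepalive on builds where 1.0 is still the default.
        proxy_http_version 1.1;
        # Clear the Connection header so keep-alive is not closed per request.
        proxy_set_header Connection "";
    }
}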
Related: How to configure Nginx as a reverse proxy
Related: How to enable keepalive in Nginx
$ curl -I --silent -H 'Accept-Encoding: gzip' http://127.0.0.1/ | grep -i -E '^(HTTP/|content-encoding:)'
HTTP/1.1 200 OK
Content-Encoding: gzip
Gzip or Brotli helps most for HTML, CSS, JavaScript, JSON, XML, and similar text responses. Do not spend CPU recompressing already-compressed assets such as JPEG, PNG, WebP, MP4, or ZIP files.
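A conservative gzip starting point, with the compression level and size threshold as assumptions to tune, might look like this; text/html is compressed by default once gzip is on, so it does not need to be listed.
gzip on;
# Level 5 is a middle ground between CPU cost and compression ratio; adjust after measuring.
gzip_comp_level 5;
# Tiny responses rarely benefit from compression.
gzip_min_length 1024;
gzip_types text/css application/javascript application/json application/xml image/svg+xml;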
server {
listen 443 ssl;
http2 on;
server_name host.example.net;
##### snipped #####
}
The dedicated http2 on; directive was added in 1.25.1. On older builds, use the listen 443 ssl http2; form instead.
HTTP/2 over TLS requires ALPN support in the TLS library, which the upstream docs note is available starting with OpenSSL 1.0.2.
Related: How to enable HTTP/2 in Nginx
$ curl -I --silent --http2 https://host.example.net/ | sed -n '1,3p'
HTTP/2 200
content-type: text/html
vary: Accept-Encoding
If the response still shows HTTP/1.1, the request is not reaching an HTTP/2 listener or the TLS stack is missing the required ALPN support.
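One way to separate the two cases is to ask the TLS endpoint directly whether it will negotiate h2 over ALPN; a successful negotiation typically prints an ALPN protocol line similar to the one shown.
$ openssl s_client -connect host.example.net:443 -alpn h2 </dev/null 2>/dev/null | grep -i 'alpn'
ALPN protocol: h2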
Related: How to enable HTTP/2 in Nginx
server {
listen 443 quic reuseport;
listen 443 ssl;
http3 on;
add_header Alt-Svc 'h3=":443"; ma=86400';
}
The current upstream HTTP/3 module is not built by default, requires OpenSSL 1.1.1 or newer, and is still marked experimental. It is worth testing only when the build, the TLS library, and the network path all support QUIC end-to-end.
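Before adding the listener, it is worth confirming the running binary was actually built with the HTTP/3 module; no output from this check means the module is missing and the quic listener will not load.
$ nginx -V 2>&1 | grep -o http_v3_module
http_v3_module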
Related: How to enable HTTP/3 in Nginx
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
The upstream Nginx docs still list ssl_session_cache none as the default. A shared cache is more efficient than the built-in per-worker cache and one megabyte stores roughly 4000 sessions.
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 1m;
open_file_cache_min_uses 2;
open_file_cache_errors on;
This tuning helps when workers repeatedly open the same local files or directories, because Nginx can cache descriptors and metadata instead of asking the kernel for them on every request.
If files are updated outside the normal deploy path, keep open_file_cache_valid short enough that stale metadata does not linger longer than expected.
Never place authenticated, personalized, or token-bearing responses into a shared cache without deliberate exclusions. Even sub-second microcache windows can break per-user dashboards, CSRF token flows, and highly dynamic APIs.
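As one illustration of such exclusions, a microcache zone can be bypassed whenever a session cookie or Authorization header is present; the cache path, zone name, cookie name, location, and backend address below are assumptions.
proxy_cache_path /var/cache/nginx/microcache levels=1:2 keys_zone=microcache:10m max_size=100m inactive=60s;

server {
    location /api/public/ {
        proxy_cache microcache;
        # Cache successful responses for one second only.
        proxy_cache_valid 200 1s;
        # Skip and refuse the cache for requests that look authenticated.
        proxy_cache_bypass $cookie_sessionid $http_authorization;
        proxy_no_cache $cookie_sessionid $http_authorization;
        proxy_pass http://127.0.0.1:8080;
    }
}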
Related: How to enable caching in Nginx
Related: How to enable microcaching in Nginx
Buffered logging, shorter formats, or exact-path exclusions can help on genuinely noisy endpoints, but broad access_log off; rules remove useful incident and troubleshooting data. Keep the change narrow and measurable.
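A narrow version of that idea buffers the main log and silences only a known-noisy endpoint; the log path, buffer sizes, and health-check path are assumptions.
# Batch writes to the main access log instead of one write per request.
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

location = /healthz {
    # Only this high-frequency health check goes unlogged.
    access_log off;
}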
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ sudo systemctl reload nginx
$ wrk -t2 -c20 -d5s http://host.example.net/
##### snipped #####
A successful tuning change improves latency, throughput, or queue pressure without new errors, worker exhaustion, or upstream instability. If the benchmark improves but error logs show resets, timeouts, or backend failures, roll back the last change and test again.
Related: How to test Nginx configuration
Related: How to manage the Nginx service
Related: How to benchmark Nginx with wrk