Proxy caching in Nginx reduces upstream load and response time by serving repeat requests from local storage instead of sending every request back to the application server. It is most useful for anonymous pages and API responses that can be shared safely across many clients.
The cache is defined once with proxy_cache_path in the http context, which sets the disk location and allocates a shared memory zone for cache metadata. A proxied location then enables that zone with proxy_cache, while proxy_cache_valid supplies fallback TTL rules and $upstream_cache_status shows whether a request was a cache MISS, HIT, or intentional BYPASS.
Caching must stay aligned with how the upstream response varies. Responses with Set-Cookie are not cached by default, while pages that depend on Authorization, locale cookies, tenant headers, or similar per-request inputs need bypass rules or a more specific proxy_cache_key to avoid serving the wrong content. Examples below use a packaged Linux layout with /etc/nginx/conf.d/, /etc/nginx/nginx.conf, and the nginx systemd unit; when systemd is not managing the daemon, reload with sudo nginx -s reload instead.
Related: How to improve Nginx performance
Related: How to configure Nginx as a reverse proxy
Steps to enable caching in Nginx:
- Create a cache directory that the Nginx worker user can write to.
$ sudo install -d -m 0750 -o www-data -g www-data /var/cache/nginx/proxy_cache
Match the directory owner to the worker account used by the local package, which is commonly www-data on Debian and Ubuntu and nginx on Red Hat-family systems.
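To confirm which account the local build actually runs workers as, the user directive in the main config can be read directly; a small sketch, assuming the packaged path /etc/nginx/nginx.conf:

```shell
# Print the account named by the "user" directive in the main config.
# CONF is the packaged default path; adjust it if the local layout differs.
CONF=/etc/nginx/nginx.conf
if [ -r "$CONF" ]; then
  awk '$1 == "user" { gsub(";", "", $2); print $2 }' "$CONF"
else
  echo "cannot read $CONF"
fi
```

If the directive is absent, the compiled-in default applies, which is what the package set at build time.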
- Create /etc/nginx/conf.d/proxy-cache.conf with the cache zone definition. On packaged installs, files in /etc/nginx/conf.d/ are usually included from the main http block automatically.
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=page_cache:10m max_size=1g inactive=60m use_temp_path=off;
keys_zone=page_cache:10m names the shared cache and reserves metadata memory, while inactive=60m removes cache files that go unused for an hour even if they have not reached their TTL yet.
If the local package does not include /etc/nginx/conf.d/*.conf from the http block, place proxy_cache_path directly inside the http { … } section of /etc/nginx/nginx.conf.
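In that case, the http context of nginx.conf holds the directive itself; a minimal sketch, where the surrounding user and events lines stand in for the rest of the packaged file:

```nginx
# Sketch of a minimal nginx.conf declaring the cache zone directly in the
# http context when conf.d is not included by the package.
user www-data;
events { }

http {
    proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2
                     keys_zone=page_cache:10m max_size=1g
                     inactive=60m use_temp_path=off;

    # server { ... } blocks or other includes follow here.
}
```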
- Enable the cache in the proxied location block and add explicit bypass rules for authenticated or session-bound traffic.
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache page_cache;
    proxy_cache_valid 200 301 302 10m;
    proxy_cache_valid 404 1m;
    proxy_cache_bypass $http_authorization $cookie_session;
    proxy_no_cache $http_authorization $cookie_session;
    add_header X-Cache-Status $upstream_cache_status always;
}
Upstream X-Accel-Expires, Cache-Control, and Expires headers can override or disable the fallback TTLs set by proxy_cache_valid.
Responses that vary by request headers, locale cookies, or tenant selection still need a matching proxy_cache_key or cache bypass rules, even when the URL stays the same.
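As an illustration, a key that separates cached variants per locale could extend the default key of $scheme$proxy_host$request_uri; the cookie name locale here is an assumed placeholder, not part of the configuration above:

```nginx
# Hypothetical sketch: append an assumed locale cookie to the default
# cache key so each locale gets its own cached copy of the same URL.
proxy_cache_key "$scheme$proxy_host$request_uri$cookie_locale";
```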
- Test the updated Nginx configuration before reloading the daemon.
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Related: How to test Nginx configuration
- Reload Nginx so the new cache zone and location settings take effect.
$ sudo systemctl reload nginx
Use sudo nginx -s reload when the daemon is running without systemd, such as in some containers or hand-started test environments.
Related: How to manage the Nginx service
- Request the proxied URL twice and confirm that the second response is served from cache.
$ curl -sS -D - http://127.0.0.1/ -o /dev/null | grep -i '^X-Cache-Status:'
X-Cache-Status: MISS
$ curl -sS -D - http://127.0.0.1/ -o /dev/null | grep -i '^X-Cache-Status:'
X-Cache-Status: HIT
A first-request MISS followed by HIT confirms that the location is storing and reusing cacheable upstream responses.
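The on-disk layout can be inspected as well: with levels=1:2, entries land in nested one- and two-character directories, and each cache file records its request key in a plain-text KEY: header. A quick look, assuming the path from the proxy_cache_path directive above; root access is typically required on a real system:

```shell
# Show a few cached files and the request keys they store. With the
# default key of $scheme$proxy_host$request_uri, entries look roughly
# like "KEY: http127.0.0.1:8080/". Prefix with sudo on a live server.
CACHE_DIR=/var/cache/nginx/proxy_cache
find "$CACHE_DIR" -type f 2>/dev/null | head -n 5
grep -ahr '^KEY:' "$CACHE_DIR" 2>/dev/null | head -n 5
```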
- Confirm that authenticated requests bypass the shared cache when bypass rules are present.
$ curl -sS -D - -H 'Authorization: Bearer example-token' http://127.0.0.1/ -o /dev/null | grep -i '^X-Cache-Status:'
X-Cache-Status: BYPASS
BYPASS means Nginx skipped cache lookup for that request and forwarded it directly to the upstream.
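Open-source Nginx builds do not ship the commercial proxy_cache_purge directive, so clearing the shared cache usually means deleting the on-disk entries and letting them repopulate on subsequent requests; a sketch, using the cache path configured above:

```shell
# Remove all cached entries under the configured cache path; Nginx
# recreates them as cacheable responses arrive. Prefix with sudo on a
# real system, and double-check CACHE_DIR before deleting anything.
CACHE_DIR=/var/cache/nginx/proxy_cache
find "$CACHE_DIR" -mindepth 1 -delete 2>/dev/null || true
```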
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
