Microcaching in Nginx keeps a tiny, short-lived cache of dynamic responses so burst traffic gets absorbed at the proxy instead of hammering the upstream application. A 1–10 second cache window is often enough to collapse thousands of identical requests into a handful of upstream hits, improving latency and protecting databases during load spikes. This article walks through enabling microcaching in Nginx.

The proxy_cache subsystem stores cached response metadata in a shared-memory zone (keys_zone) and stores response bodies on disk in a cache directory. Each response is indexed by a cache key (commonly scheme + host + URI), and Nginx decides whether to serve from cache or fetch from upstream based on cache state, TTL, and bypass rules.

Microcaching is safe only for content that is identical for multiple clients (typically anonymous GET and HEAD requests). Requests with cookies or authentication headers should bypass the cache to prevent leaking personalized content, and upstream responses that set cookies should be treated carefully. Overriding upstream caching headers can make microcaching effective for dynamic apps, but it also means Nginx may cache responses the application marked as non-cacheable.

Steps to enable microcaching in Nginx:

  1. Extract the Nginx worker user from /etc/nginx/nginx.conf.
    $ sudo awk '$1=="user"{print $2}' /etc/nginx/nginx.conf | tr -d ';'
    www-data

    Packaged Nginx commonly runs workers as www-data (Debian/Ubuntu) or nginx (RHEL/openSUSE).
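
    The extraction can be sanity-checked against a throwaway config file; the scratch path below is illustrative:

```shell
# Write a minimal, hypothetical config fragment to a scratch file
printf 'user www-data;\nworker_processes auto;\n' > /tmp/sample-nginx.conf
# Same extraction as in the step: first field "user", second field is the value
awk '$1=="user"{print $2}' /tmp/sample-nginx.conf | tr -d ';'
# prints: www-data
```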

  2. Create the on-disk microcache directory with ownership matching the worker user.
    $ sudo install --directory --owner=www-data --group=www-data --mode=0750 /var/cache/nginx/microcache

    Replace www-data with the value returned by the previous step.
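
    Ownership and mode can be verified afterwards with stat. The scratch directory below is illustrative (it avoids needing root by omitting --owner); the real check targets /var/cache/nginx/microcache:

```shell
# Create a directory with the same mode, owned by the current user here
install --directory --mode=0750 /tmp/microcache-demo
# Print octal mode and owner; the real directory should show the worker user
stat --format='%a %U' /tmp/microcache-demo
```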

  3. Create a cache zone definition in /etc/nginx/conf.d/microcache.conf.
    $ sudo tee /etc/nginx/conf.d/microcache.conf >/dev/null <<'EOF'
    proxy_cache_path /var/cache/nginx/microcache levels=1:2 keys_zone=microcache:20m max_size=1g inactive=60m use_temp_path=off;
    EOF

    keys_zone=… controls metadata memory, max_size caps disk usage, and inactive drops cold entries even if TTLs are longer.
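
    A rough capacity check for keys_zone: the Nginx documentation estimates about 8,000 cache keys per megabyte of zone memory, so the 20m zone above tracks on the order of:

```shell
# ~8,000 keys per MB of keys_zone memory (per the nginx docs estimate)
zone_mb=20
echo "$((zone_mb * 8000)) keys"
# prints: 160000 keys
```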

  4. Insert microcaching directives into the proxied location block.
    location / {
        proxy_pass http://127.0.0.1:8080;

        # Preserve original request details for the upstream application.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Skip the cache for anything that may be personalized.
        set $skip_cache 0;

        if ($request_method !~ ^(GET|HEAD)$) {
            set $skip_cache 1;
        }

        if ($http_authorization != "") {
            set $skip_cache 1;
        }

        if ($http_cookie != "") {
            set $skip_cache 1;
        }

        # Short TTLs are the heart of microcaching.
        proxy_cache microcache;
        proxy_cache_key "$scheme$host$request_uri";
        proxy_cache_methods GET HEAD;
        proxy_cache_valid 200 301 302 10s;
        proxy_cache_valid 404 1s;

        # Collapse concurrent misses into a single upstream request.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;
        proxy_cache_lock_age 10s;

        # Serve stale entries while revalidating or when the upstream fails.
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;

        # Ignore upstream cache headers so dynamic responses become cacheable,
        # but honor the skip conditions computed above.
        proxy_ignore_headers Cache-Control Expires;
        proxy_cache_bypass $skip_cache;
        proxy_no_cache $skip_cache;

        # Expose cache state (MISS/HIT/BYPASS/...) for verification.
        add_header X-Cache-Status $upstream_cache_status always;
    }

    Do not microcache personalized content or authenticated responses, because cached responses can be served to the wrong client if bypass rules are incomplete.
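
    As an alternative to the if blocks, the same skip logic can be expressed with map directives at the http level (a sketch; the intermediate variable names are arbitrary, and map blocks must live outside the server context):

```nginx
# Each map yields 0 when the request is safe to cache, 1 otherwise.
map $request_method $skip_method { default 1; GET 0; HEAD 0; }
map $http_authorization $skip_auth { default 1; "" 0; }
map $http_cookie $skip_cookie { default 1; "" 0; }

# Cache only when all three checks pass.
map "$skip_method$skip_auth$skip_cookie" $skip_cache {
    default 1;
    "000"   0;
}
```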

  5. Test the Nginx configuration for syntax errors.
    $ sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

  6. Reload Nginx to apply the changes.
    $ sudo systemctl reload nginx

    A reload starts new worker processes with the new configuration and gracefully retires the old ones, so active connections are not dropped.

  7. Prime the microcache with a request to a cacheable URL.
    $ curl -sI http://127.0.0.1/ | grep -i '^X-Cache-Status:'
    X-Cache-Status: MISS

    MISS means the response came from upstream and was eligible to be stored.

  8. Repeat the request within the TTL to confirm the cached response is served.
    $ curl -sI http://127.0.0.1/ | grep -i '^X-Cache-Status:'
    X-Cache-Status: HIT

    HIT indicates the response was served from cache, and BYPASS indicates a skip condition (cookie, auth header, non-GET/HEAD) prevented caching.

  9. Confirm cached objects exist on disk in the cache directory.
    $ sudo find /var/cache/nginx/microcache -type f | head -n 5
    /var/cache/nginx/microcache/1/67/c14e39809f55f159475c12f9b06ba671
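
    The on-disk layout follows from levels=1:2: the file name is the MD5 hash of the cache key, the first-level directory is the last hex character of that hash, and the second level is the two characters before it. This can be reproduced for any key; the key below is a hypothetical "$scheme$host$request_uri" value:

```shell
# The MD5 of the cache key becomes the cache file name
key='http127.0.0.1/'
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
# levels=1:2 -> last 1 hex char, then the preceding 2 chars, as directories
l1=$(printf '%s' "$hash" | cut -c 32)
l2=$(printf '%s' "$hash" | cut -c 30-31)
echo "/var/cache/nginx/microcache/$l1/$l2/$hash"
```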