DoS-style abuse often shows up as request floods, aggressive scraping, or repeated authentication attempts that saturate upstream services and starve legitimate traffic. Rate limiting at the Nginx edge protects expensive endpoints by capping bursty clients before application workers become the bottleneck.
Request limiting uses limit_req_zone to store per-client state in shared memory and limit_req to enforce a steady rate with an optional burst allowance on specific locations. Connection limiting uses limit_conn_zone plus limit_conn to cap concurrent connections per key, which helps against slow connections that hold onto workers longer than expected.
Rates that are too low can block legitimate users (especially when many clients share an IP via NAT or mobile carriers), while rates that are too high barely change the outcome of an attack. When Nginx is behind a reverse proxy or load balancer, ensure the real client address is set before limiting so the limiter key does not collapse to the proxy IP; request limiting complements upstream DDoS mitigation rather than replacing it.
Related: How to secure Nginx web server
Related: How to block user agents in Nginx
Steps to prevent DoS abuse in Nginx:
- Define shared-memory zones for limit_req and limit_conn inside the http block.
```nginx
http {
    limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

    limit_req_status 429;
    limit_conn_status 429;

    ##### snipped #####
}
```

When Nginx runs behind a proxy, configure the realip module so $binary_remote_addr reflects the client address before limits are applied.
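A minimal realip sketch is shown below; the 10.0.0.0/8 range and the X-Forwarded-For header are assumptions and must be replaced with the actual address range and header used by the proxy or load balancer in front of Nginx:

```nginx
http {
    # Trust forwarded client addresses only from this (assumed) proxy range.
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
}
```

With this in place, $binary_remote_addr keys the limiter on the end client rather than on the proxy address.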
- Apply limit_req to a sensitive endpoint such as a login route.
```nginx
server {
    ##### snipped #####

    location = /login {
        limit_req zone=req_per_ip burst=20 nodelay;
    }
}
```

Increase burst for legitimate spikes, or drop nodelay to smooth spikes instead of forwarding them upstream.
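As a middle ground between nodelay and full smoothing, limit_req also accepts a delay parameter (available since Nginx 1.15.7) for two-stage limiting; a hedged sketch using the same zone:

```nginx
location = /login {
    # Forward the first 10 excess requests immediately, delay the next 10
    # to match the configured rate, and reject anything beyond burst=20.
    limit_req zone=req_per_ip burst=20 delay=10;
}
```

This keeps short legitimate spikes fast while still throttling sustained abuse before it reaches the application.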
Overly strict limits can block real users from shared IPs and can look like an outage during traffic spikes.
- Apply limit_conn at the server level to cap concurrent connections per client address.
```nginx
server {
    limit_conn conn_per_ip 20;

    ##### snipped #####
}
```

Connection caps are most useful for long-lived connections (slowloris-style pressure, uploads, long polling), while limit_req controls request bursts.
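Connection caps pair well with shorter client timeouts, which free workers held by deliberately slow clients; the values below are illustrative assumptions rather than recommendations, and should be tuned for legitimate slow uploads:

```nginx
server {
    # Drop clients that trickle request headers or body bytes.
    client_header_timeout 10s;
    client_body_timeout 10s;

    # Give up on clients that read responses extremely slowly.
    send_timeout 10s;
}
```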
- Test the configuration syntax for errors.
```
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
- Reload Nginx to activate the new limits.
```
$ sudo systemctl reload nginx
```
- Send a short burst of requests to the limited endpoint to confirm 429 responses.
```
$ for i in $(seq 1 40); do curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/login; done | sort | uniq -c
     20 200
     20 429
```

- Review access logs for 429 entries to tune rates and bursts.
```
$ sudo tail -n 50 /var/log/nginx/access.log | grep " 429 "
203.0.113.10 - - [14/Dec/2025:12:34:56 +0000] "GET /login HTTP/1.1" 429 169 "-" "curl/8.4.0"
##### snipped #####
```
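To see which clients hit the limit hardest, the 429 entries can be aggregated per source address. A minimal sketch, assuming the default combined log format (status code in field 9) and using hypothetical inline sample lines in place of the real log file:

```shell
# Hypothetical log lines standing in for /var/log/nginx/access.log.
printf '%s\n' \
  '203.0.113.10 - - [14/Dec/2025:12:34:56 +0000] "GET /login HTTP/1.1" 429 169 "-" "curl/8.4.0"' \
  '203.0.113.10 - - [14/Dec/2025:12:34:57 +0000] "GET /login HTTP/1.1" 429 169 "-" "curl/8.4.0"' \
  '198.51.100.7 - - [14/Dec/2025:12:35:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.4.0"' |
awk '$9 == 429 { print $1 }' | sort | uniq -c | sort -rn
```

Against the live log, replace the printf pipeline with the file path, e.g. `awk '$9 == 429 { print $1 }' /var/log/nginx/access.log | sort | uniq -c | sort -rn`.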
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
