Tuning worker_processes controls how many request-handling workers Nginx keeps ready, which directly affects how evenly CPU time is used during traffic spikes, reverse-proxy bursts, and many short TLS requests. The right worker count keeps the event loop busy without turning every extra core into needless scheduler overhead.
The Nginx master process reads the configuration and manages the worker pool, while the workers handle connections in an event-driven loop. Upstream documentation still lists worker_processes 1; as the built-in default, but worker_processes auto; remains the standard starting point because it sizes the pool to the visible CPU count. Real concurrency still depends on worker_connections and the open-file limit.
Current Linux packages commonly already ship with worker_processes auto; in /etc/nginx/nginx.conf. Keep that on dedicated hosts unless the service must stay inside a smaller CPU quota, cpuset, or pinning policy, in which case an explicit value is safer. Test every change with nginx -t, reload only after a clean parse, and confirm the running worker count instead of assuming the config file and live process state still match.
Related: How to improve Nginx performance
Related: How to tune worker_connections in Nginx
Steps to tune worker_processes in Nginx:
- Check how many CPU cores are visible to the service.
$ nproc
8
Use an explicit worker count instead of auto when the host exposes more CPUs than Nginx is actually allowed to use through quotas, cpusets, or deliberate core pinning.
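On cgroup v2 hosts the allowed CPU budget can be read from the cpu.max quota file, whose format is "quota period" (or "max period" when unlimited). The sketch below derives a worker count from a hypothetical quota string; on a live host, read /sys/fs/cgroup/cpu.max instead of the sample value.

```shell
# Sketch: derive an effective worker count from a cgroup v2 CPU quota.
# The sample line below is an assumption for illustration; a real host
# would read it from /sys/fs/cgroup/cpu.max.
quota_line="200000 100000"   # hypothetical quota: 200ms per 100ms period = 2 CPUs
quota=${quota_line%% *}
period=${quota_line##* }
if [ "$quota" = "max" ]; then
    workers=$(nproc)         # no quota set: fall back to the visible CPU count
else
    workers=$(( (quota + period - 1) / period ))  # round up to whole CPUs
fi
echo "worker_processes $workers;"
```

With the sample quota of two full CPUs, this prints worker_processes 2;, which is the value to set explicitly instead of auto.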
- Check the current worker_processes line in the main Nginx configuration file.
$ sudo grep -nE '^[[:space:]]*worker_processes([[:space:]]|$)' /etc/nginx/nginx.conf
2:worker_processes auto;
Examples below use the common Linux package layout with /etc/nginx/nginx.conf. Upstream docs note that other builds may keep the main file under paths such as /usr/local/nginx/conf/nginx.conf or /usr/local/etc/nginx/nginx.conf instead.
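When the main file location is unclear, the configure arguments that nginx -V prints on stderr include the built-in conf-path. The sketch below parses a sample configure string so it runs without Nginx installed; the live one-liner is shown in the trailing comment.

```shell
# Sketch: extract the built-in configuration path from nginx's configure
# arguments. The string below is a stand-in for real `nginx -V` output.
configure_args="--prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/run/nginx.pid"
conf_path=$(printf '%s\n' "$configure_args" | grep -o 'conf-path=[^ ]*' | cut -d= -f2)
echo "$conf_path"
# On a live host: sudo nginx -V 2>&1 | grep -o 'conf-path=[^ ]*'
```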
- Edit the main configuration file.
$ sudoedit /etc/nginx/nginx.conf
- Set worker_processes in the main context to the target value.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
Keep auto as the default choice for most dedicated hosts because it tracks the visible CPU count automatically.
Replace it with an explicit value such as worker_processes 2; when the service must stay inside a smaller CPU budget than the host advertises.
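When an explicit count is paired with deliberate core pinning, the related worker_cpu_affinity directive binds each worker to a CPU bitmask. The fragment below is an illustrative sketch for a two-worker layout, not a drop-in recommendation:

```nginx
user www-data;
worker_processes 2;
# Hypothetical pinning: first worker on CPU 0, second worker on CPU 1.
# "worker_cpu_affinity auto;" instead lets Nginx assign cores by itself.
worker_cpu_affinity 0001 0010;
pid /run/nginx.pid;
```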
- Check the matching connection and file-descriptor settings before raising the worker count aggressively.
$ sudo nginx -T 2>/dev/null | grep -E '^[[:space:]]*(worker_connections|worker_rlimit_nofile)([[:space:]]|$)'
worker_connections 1024;
The official Nginx docs note that each worker can open up to worker_connections sockets, but the real limit still cannot exceed the current open-files ceiling.
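As a rough capacity check, the theoretical socket ceiling is the product of the two directives. The sketch below uses the sample values from the output above and assumes a reverse-proxy setup, where each proxied client request also holds an upstream connection and roughly halves the usable number:

```shell
# Sketch: estimate the connection ceiling from worker_processes and
# worker_connections. Sample values only; read live ones from `nginx -T`.
worker_processes=2
worker_connections=1024
total=$(( worker_processes * worker_connections ))
proxied=$(( total / 2 ))   # each proxied request also holds an upstream socket
echo "sockets: $total, approx proxied clients: $proxied"
```

With these sample values the ceiling is 2048 sockets, or roughly 1024 simultaneous proxied clients.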
- Test the updated Nginx configuration for syntax errors.
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
- Reload Nginx to apply the new worker count.
$ sudo systemctl reload nginx
If systemd is not managing the service, use sudo nginx -s reload instead. The upstream control docs note that signal-based reloads must be sent by the same user that started the master process.
- Confirm that the nginx unit stayed active after the reload.
$ sudo systemctl is-active nginx
active
Related: How to manage the Nginx service
- Verify that the running worker pool matches the configured target.
$ sudo ps -C nginx -o cmd=
nginx: master process nginx
nginx: worker process
nginx: worker process
$ sudo ps -C nginx -o cmd= | grep -c '^nginx: worker process$'
2
The count should match the explicit number that was configured, or the CPU count that auto detects on the host.
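This comparison can be scripted for monitoring. The sketch below uses two hypothetical sample values in place of live nginx -T and ps output, so it runs anywhere; on a real host, substitute the actual command results:

```shell
# Sketch: compare the configured worker count against the running one.
# Both values below are hypothetical stand-ins for live command output.
configured="2"   # from: nginx -T | grep worker_processes
running=2        # from: ps -C nginx -o cmd= | grep -c '^nginx: worker process$'
if [ "$configured" = "auto" ]; then
    expected=$(nproc)   # auto sizes the pool to the visible CPU count
else
    expected=$configured
fi
[ "$running" -eq "$expected" ] && echo "match" || echo "mismatch: $running vs $expected"
```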
- Confirm that Nginx still answers requests after the reload.
$ curl -I -sS http://127.0.0.1/
HTTP/1.1 200 OK
Server: nginx/1.24.0 (Ubuntu)
Date: Thu, 09 Apr 2026 13:45:04 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Thu, 09 Apr 2026 13:45:01 GMT
Connection: keep-alive
ETag: "69d7ad5d-267"
Accept-Ranges: bytes
Use the site hostname or HTTPS URL when localhost on port 80 is not the listener that serves the real workload.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
