On busy Linux systems, a single runaway process can consume most available CPU time and degrade responsiveness for everything else. Constraining the CPU usage of individual processes keeps interactive workloads responsive and maintains predictable performance on shared servers. Applying explicit CPU limits is particularly useful for resource-hungry batch jobs, development builds, and untrusted workloads.
The Linux scheduler divides time slices among runnable tasks using priorities, niceness values, and control groups. User-space tools such as cpulimit adjust process priority dynamically to keep a process near a target percentage of a single core. Kernel features such as cgroups group processes under controllers like cpu, where per-group parameters determine how much CPU time tasks are allowed to consume.
Imposing CPU caps inevitably slows affected workloads and can cause timeouts in latency-sensitive applications if limits are too strict. The examples below assume a systemd-based Linux distribution such as Ubuntu with support for legacy cgroup v1 tools, and they use a synthetic yes workload solely for demonstration. Production processes should be limited gradually and monitored closely, and changes to cgroups require root access and familiarity with existing resource policies.
Method 1: cpulimit (simple percentage cap)
Method 2: cgroups (kernel-level control)
Steps to limit CPU usage using cpulimit:
- Open a terminal on the target Linux host with an account that can use sudo.
- Install the cpulimit package using the distribution's package manager; the command below is for Ubuntu.
$ sudo apt update && sudo apt install --assume-yes cpulimit
On openSUSE, use sudo zypper install cpulimit; on Fedora or RHEL, use sudo dnf install cpulimit.
- Start a CPU-intensive test process in the background.
$ yes > /dev/null &
The yes > /dev/null & command keeps one CPU core busy without producing visible output, which is useful for testing limits.
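When the workload is started from the same shell, its PID can be captured directly through the shell's $! variable instead of searching ps output afterwards. A small sketch (the YES_PID variable name is illustrative):

```shell
# Start the synthetic workload and record its PID immediately;
# $! holds the PID of the most recent background job in this shell.
yes > /dev/null &
YES_PID=$!
echo "workload running as PID $YES_PID"

# Later, the same variable can be reused, for example:
#   sudo cpulimit --pid "$YES_PID" --limit 20
kill "$YES_PID"
```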
- Identify the process ID (PID) of the CPU-intensive test process.
$ ps aux | grep yes
user      1234 99.0  0.0   7088   920 pts/0    R    10:22   0:15 yes
user      1236  0.0  0.0   8900   828 pts/0    S+   10:22   0:00 grep --color=auto yes
The PID in the first non-grep line corresponds to the yes process.
- Apply a CPU usage limit of 20% of a single core to the test process.
$ sudo cpulimit --pid 1234 --limit 20
cpulimit: limiting process 1234 ("yes") to 20% CPU
Replace 1234 with the actual PID; the --limit 20 option caps usage at approximately 20% of one core.
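Depending on the cpulimit version, the target can also be selected by executable name, or the command can be launched already limited, which avoids looking up the PID first. A hedged sketch (option syntax varies between cpulimit releases; check the local man page):

```shell
# Limit any running process whose executable is named "yes" to 20% of one core.
sudo cpulimit --exe yes --limit 20

# Some cpulimit builds can also launch a command directly under the limiter.
cpulimit --limit 20 yes > /dev/null
```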
- Monitor the limited process in top to confirm reduced CPU usage.
$ top -p 1234
top - 10:25:18 up 1:18, 1 user, load average: 0.21, 0.39, 0.36
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1234 user      20   0    7088    920    860 R  19.8  0.0   0:30.12 yes
##### snipped #####
A stable value near the requested percentage indicates that the limit is active.
- Adjust the CPU limit if the process needs more or less CPU time.
$ sudo cpulimit --pid 1234 --limit 50
Increasing the limit restores performance but can raise overall system load; changes should be applied in small increments on production systems.
- Terminate the test process after confirming that CPU limiting behaves as expected.
$ kill 1234
Leaving the synthetic yes workload running indefinitely can distort system load metrics and mask real performance issues.
Steps to limit CPU usage using cgroups:
- Confirm that the system supports the cgroup v1 CPU controller and that root access is available for managing cgroups.
- Install the cgroup-tools package on Ubuntu.
$ sudo apt update && sudo apt install --assume-yes cgroup-tools
On distributions without cgroup-tools, prefer systemd utilities such as systemd-run --scope -p CPUQuota=20% for CPU control; note that CPUQuota takes a percentage suffix.
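On systemd-based hosts, the same effect can be achieved without cgroup-tools by running the workload in a transient scope. A minimal sketch, assuming systemd and root access:

```shell
# Run the synthetic workload as a transient scope capped at 20% of one CPU.
# CPUQuota= takes a percentage, where 100% corresponds to one full core.
sudo systemd-run --scope -p CPUQuota=20% yes > /dev/null
```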
- Create a new cgroup named cpulimitedgroup in the cpu controller.
$ sudo cgcreate -g cpu:/cpulimitedgroup
This command prepares a dedicated CPU control group that can hold one or more related processes.
- Start the same CPU-intensive yes test process in the background.
$ yes > /dev/null &
The background job provides a reproducible workload for observing how cgroups affect CPU scheduling.
- Find the PID of the yes test process.
$ ps aux | grep yes
user      2345 99.0  0.0   7088   920 pts/0    R    10:32   0:22 yes
user      2347  0.0  0.0   8900   828 pts/0    S+   10:32   0:00 grep --color=auto yes
Record the PID from the first non-grep entry for use in subsequent commands.
- Move the test process into the new cgroup.
$ sudo cgclassify -g cpu:/cpulimitedgroup 2345
Multiple PIDs can be classified into the same cgroup to share the configured CPU budget.
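Alternatively, a process can be started inside the cgroup from the outset with cgexec (also part of cgroup-tools), which avoids the separate cgclassify step:

```shell
# Launch the workload directly inside the cgroup; no reclassification needed.
sudo cgexec -g cpu:/cpulimitedgroup yes > /dev/null &
```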
- Reduce the CPU share of the cgroup relative to the default weight.
$ sudo cgset -r cpu.shares=512 cpulimitedgroup
The default CPU share is typically 1024, so a value of 512 halves the relative weight of this group compared to default processes; the exact CPU percentage depends on other groups and workloads.
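The resulting CPU fraction under contention follows from the ratio of the weights; for example, against a single default group (illustrative arithmetic only):

```shell
# With this group at 512 shares competing against one default group at 1024,
# the limited group receives 512 / (512 + 1024) of CPU time under full load.
echo $(( 512 * 100 / (512 + 1024) ))   # prints 33, i.e. about a third
```

Shares only matter while the CPU is contended; an otherwise idle system still lets the group use spare cycles.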
- Inspect the current CPU-related settings for the cgroup to confirm the change.
$ sudo cgget -g cpu:/cpulimitedgroup
cpulimitedgroup:
cpu.shares: 512
##### snipped #####
Additional parameters such as cpu.cfs_quota_us can enforce hard ceilings when stricter limits are required.
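As a hedged sketch of such a hard ceiling, the group could be granted 20 ms of runtime per 100 ms scheduling period, which caps it at 20% of one core regardless of contention (cgroup v1 parameter names; the values are illustrative):

```shell
# cpu.cfs_period_us: length of one accounting period (100 ms here).
# cpu.cfs_quota_us:  runtime allowed per period; 20000/100000 = 20% of one core.
sudo cgset -r cpu.cfs_period_us=100000 cpulimitedgroup
sudo cgset -r cpu.cfs_quota_us=20000 cpulimitedgroup
```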
- Monitor the limited process with top to verify reduced CPU usage.
$ top -p 2345
top - 10:35:48 up 1:29, 1 user, load average: 0.35, 0.44, 0.40
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2345 user      20   0    7088    920    860 R  60.0  0.0   0:40.01 yes
##### snipped #####
Comparing this output with the unrestricted case demonstrates how CPU shares influence the scheduler.
- Stop the test workload after confirming that the cgroup settings behave as expected.
$ kill 2345
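Once testing is complete, the cgroup created for the demonstration can itself be removed with cgdelete from cgroup-tools:

```shell
# Remove the cgroup that was created for this test.
sudo cgdelete cpu:/cpulimitedgroup
```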
Setting extremely low shares or quotas on essential services can cause sluggish behavior and request timeouts, so CPU limits should be tuned carefully and documented.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
