On busy Linux systems, a single runaway process can consume most available CPU time and degrade responsiveness for everything else. Constraining the CPU usage of individual processes keeps interactive workloads responsive and maintains predictable performance on shared servers. Applying explicit CPU limits is particularly useful for resource-hungry batch jobs, development builds, and untrusted workloads.

The Linux scheduler divides time slices among runnable tasks using priorities, niceness values, and control groups. User-space tools such as cpulimit throttle a process by repeatedly pausing and resuming it with SIGSTOP and SIGCONT signals, keeping it near a target percentage of a single core. Kernel features such as cgroups group processes under controllers like cpu, where per-group parameters determine how much CPU time tasks are allowed to consume.
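
The cgroup v2 interface makes these per-group parameters concrete: writing a quota and period to a group's cpu.max file caps every task in the group. The following is a minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup with the cpu controller enabled for new groups; the group name demo and PID 15999 are placeholders.

    $ sudo mkdir /sys/fs/cgroup/demo
    $ echo "20000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max
    $ echo 15999 | sudo tee /sys/fs/cgroup/demo/cgroup.procs

Here 20000 100000 allows at most 20 ms of CPU time per 100 ms period, roughly 20% of one core, and adding a PID to cgroup.procs applies the cap immediately.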

Imposing CPU caps inevitably slows affected workloads and can cause timeouts in latency-sensitive applications if limits are too strict. The examples below assume a systemd-based Linux distribution such as Ubuntu; the cpulimit procedure works without cgroups, while the systemd-run procedure requires the cgroup v2 hierarchy. Both procedures use a synthetic yes workload solely for demonstration. Production processes should be limited gradually and monitored closely, and changes to cgroups require root access and familiarity with existing resource policies.

Steps to limit CPU usage using cpulimit:

  1. Open a terminal on the target Linux host with an account that can use sudo.
  2. Install the cpulimit package using the distribution package manager; the example below uses apt on Ubuntu.
    $ sudo apt update && sudo apt install --assume-yes cpulimit

    On openSUSE, use sudo zypper install cpulimit; on Fedora or RHEL, use sudo dnf install cpulimit.
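
    To confirm the installation, check that the binary is on the PATH (the path shown is typical for Ubuntu and may vary):

    $ command -v cpulimit
    /usr/bin/cpulimit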

  3. Start a CPU-intensive test process in the background.
    $ yes > /dev/null &

    The yes > /dev/null & command keeps one CPU core busy without producing visible output, which is useful for testing limits.
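
    Alternatively, the shell records the PID of the most recent background job in $!, which can be captured at launch and makes the ps lookup in the next step unnecessary:

    $ yes > /dev/null &
    $ YES_PID=$!; echo "$YES_PID"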

  4. Identify the process ID (PID) of the CPU-intensive test process.
    $ ps -o user,pid,pcpu,pmem,tty,time,cmd -C yes
    USER         PID %CPU %MEM TT           TIME CMD
    root       15999 99.0  0.0 ?        00:00:02 yes

    The PID in the second column of the output corresponds to the yes process.
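
    When only the PID is needed, pgrep is a shorter equivalent; the -x flag matches the exact process name:

    $ pgrep -x yes
    15999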

  5. Apply a CPU usage limit of 20% of a single core to the test process.
    $ sudo timeout 3s cpulimit --pid=15999 --limit=20 --verbose
    10 CPUs detected.
    Priority changed to -10
    Process 15999 detected
    
    %CPU	work quantum	sleep quantum	active rate
    22.18%	 34013 us	 65986 us	37.72%
    20.24%	 41253 us	 58746 us	41.75%
    30.24%	 31983 us	 68016 us	48.37%
    Exiting...

    Replace 15999 with the actual PID; the --limit=20 option caps usage at approximately 20% of one core.
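
    cpulimit can also launch the target itself instead of attaching to an existing PID, which avoids the window between starting a process and limiting it:

    $ cpulimit --limit=20 yes > /dev/null &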

  6. Monitor the limited process in top to confirm CPU usage changes.
    $ top -b -n 1 -p 15999
    top - 22:37:25 up 2 days,  9:28,  0 user,  load average: 0.59, 0.40, 0.26
    Tasks:   1 total,   1 running,   0 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  2.9 us,  6.8 sy,  0.0 ni, 90.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
    MiB Mem :  23745.0 total,  21559.9 free,   1299.4 used,   1213.3 buff/cache     
    MiB Swap:   1024.0 total,   1024.0 free,      0.0 used.  22445.6 avail Mem 
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    15999 root      20   0    2272   1164   1076 R 100.0   0.0   0:03.09 yes

    While cpulimit is attached, a stable value near the requested percentage indicates that the limit is active. In the sample above, %CPU has returned to 100.0 because the timeout-wrapped cpulimit from the previous step had already exited; keep cpulimit running (without timeout) while monitoring to see the throttled value.
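
    If the sysstat package is installed, pidstat reports per-interval CPU percentages, which show throttling more clearly than a single top snapshot (here, one-second samples taken five times):

    $ pidstat -p 15999 1 5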

  7. Adjust the CPU limit if the process needs more or less CPU time.
    $ sudo timeout 3s cpulimit --pid=15999 --limit=50 --verbose
    10 CPUs detected.
    Priority changed to -10
    Process 15999 detected
    
    %CPU	work quantum	sleep quantum	active rate
    48.23%	 73174 us	 26826 us	70.58%
    42.55%	 86698 us	 13301 us	73.78%
    Exiting...

    Increasing the limit restores performance but can raise overall system load; changes should be applied in small increments on production systems.
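
    cpulimit can also attach by executable name rather than PID, which is convenient when the PID is not known in advance; the --exe option matches the program file name:

    $ sudo timeout 3s cpulimit --exe=yes --limit=50 --verbose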

  8. Terminate the test process after confirming that CPU limiting behaves as expected.
    $ kill 15999

    Leaving the synthetic yes workload running indefinitely can distort system load metrics and mask real performance issues.

Steps to limit CPU usage using systemd-run (cgroups v2):

  1. Confirm the host uses systemd and cgroup v2 so CPU quotas can be applied per scope.
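    $ stat -fc %T /sys/fs/cgroup/
    cgroup2fs

    A cgroup2fs result indicates the unified cgroup v2 hierarchy, while tmpfs indicates the legacy v1 layout; checking the filesystem type this way is a common convention rather than a systemd command.
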
  2. Start a constrained scope that runs a CPU-intensive test workload.
    $ sudo systemd-run --unit=cpulimited-demo --scope -p CPUQuota=20% /bin/sh -c 'yes > /dev/null'
    Failed to request invocation ID for unit: Unknown object '/org/freedesktop/systemd1/unit/self'.

    Some containerized environments restrict systemd-run invocation tracking; run this step on a full VM or host if the scope cannot be created.
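
    If scopes cannot be created, omitting --scope runs the workload as a transient service that systemd supervises directly, with the same CPUQuota= setting; the --collect flag unloads the unit after it exits:

    $ sudo systemd-run --unit=cpulimited-demo --collect -p CPUQuota=20% /bin/sh -c 'yes > /dev/null'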

  3. Confirm the CPU quota setting on the scope.
    $ systemctl show -p CPUQuotaPerSecUSec cpulimited-demo.scope
    CPUQuotaPerSecUSec=infinity

    Note that systemd-run --scope creates a .scope unit, not a .service unit, and scope units do not expose a MainPID property. With an active 20% quota, CPUQuotaPerSecUSec reports 200ms (200 ms of CPU time per second of wall-clock time); the infinity shown here reflects the sample environment, in which the scope could not be created.

  4. Monitor the constrained workload with top to verify CPU usage.
    $ top -b -n 1
    top - 22:47:24 up 2 days,  9:38,  0 user,  load average: 0.14, 0.17, 0.20
    Tasks:   9 total,   1 running,   8 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  4.6 us,  0.9 sy,  0.0 ni, 94.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
    MiB Mem :  23745.0 total,  21541.5 free,   1316.9 used,   1214.1 buff/cache     
    MiB Swap:   1024.0 total,   1024.0 free,      0.0 used.  22428.1 avail Mem 
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
        1 root      20   0   21048  11892   8988 S   0.0   0.0   0:04.42 systemd
       23 root      19  -1   42740  13312  11720 S   0.0   0.1   0:00.53 systemd-j+
      171 message+  20   0    9748   4648   3844 S   0.0   0.0   0:01.77 dbus-daem+
    ##### snipped #####

    In this sample environment the scope was not created, so no throttled yes process appears in the output; on a host where the scope starts successfully, identify the workload PID with systemd-cgls and pass it to top -p to observe usage settling near the quota.
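
    On a host where the scope is running, the quota can also be changed without restarting the workload; systemctl set-property applies to live scopes and services:

    $ sudo systemctl set-property cpulimited-demo.scope CPUQuota=50%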

  5. Stop the scope when testing is complete.
    $ sudo systemctl stop cpulimited-demo.scope
    Failed to stop cpulimited-demo.scope: Unit cpulimited-demo.scope not loaded.

    Leaving throttled workloads running can distort system load and lead to misleading performance diagnostics.