A custom systemd slice groups related services or transient workloads under one cgroup branch so shared CPU, memory, and task limits can be applied in one place. That is useful when queue workers, batch jobs, or supporting daemons should compete inside the same resource budget instead of each unit carrying its own separate limits.
A *.slice unit creates a node in the Linux control-group tree, and services or scopes join it through the Slice= setting or through a transient launch such as systemd-run --slice=…. Current upstream systemd.slice documentation describes slice names as dash-separated paths in that hierarchy, so extra dashes create parent slices instead of acting as decorative separators.
Examples below use a top-level slice named workload.slice with CPUWeight=, MemoryMax=, and TasksMax= so the flow stays readable and safe to test. The commands target the system manager under /etc/systemd/system; for per-user slices, use ~/.config/systemd/user and systemctl --user instead. The slice intentionally stays static with no Install section because current upstream systemd.special guidance says adding custom slices to slices.target is usually unnecessary unless the slice itself must always be active after boot.
A simple name such as workload.slice creates one top-level custom slice. Each extra dash adds another level in the slice path, so apps-web.slice is nested below apps.slice rather than being a flat name.
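The dash-to-hierarchy rule can be sketched with a small shell helper. This function is purely illustrative and hypothetical (systemd performs this mapping internally); it only shows how each dash level in a slice name becomes a parent slice in the cgroup path:

```shell
#!/bin/sh
# Hypothetical helper, not part of systemd: map a dash-separated slice
# name to the cgroup path systemd would place it at.
slice_to_cgroup_path() {
  name="${1%.slice}"            # strip the .slice suffix
  path="" ; prefix=""
  old_ifs=$IFS; IFS='-'
  for part in $name; do
    prefix="${prefix:+$prefix-}$part"
    path="$path/$prefix.slice"  # every dash level adds a parent slice
  done
  IFS=$old_ifs
  printf '%s\n' "$path"
}

slice_to_cgroup_path workload.slice   # /workload.slice
slice_to_cgroup_path apps-web.slice   # /apps.slice/apps-web.slice
```

So apps-web.slice does not sit next to workload.slice; it lives one level down, inside apps.slice.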
# /etc/systemd/system/workload.slice
[Unit]
Description=Workload resource slice

[Slice]
CPUWeight=200
MemoryMax=500M
TasksMax=64
CPUWeight= controls relative CPU scheduling weight, MemoryMax= sets a hard memory limit, and TasksMax= caps the number of tasks that units inside the slice may create. Adjust the values to match the real workload.
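CPUWeight= in particular is relative rather than absolute: under full CPU contention, each sibling cgroup gets weight divided by the sum of sibling weights. A quick sketch of the resulting share (the sibling weight of 100 used here is the systemd default and is assumed for illustration):

```shell
# Sketch: CPUWeight= is a relative weight, not a percentage.
# Assumed scenario: workload.slice at weight 200 competing with one
# sibling cgroup left at the default weight of 100.
awk 'BEGIN {
  workload = 200; sibling = 100
  printf "workload share: %.0f%%\n", 100 * workload / (workload + sibling)
}'
# prints "workload share: 67%"
```

With no contention, the weight has no effect; units may use idle CPU freely.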
$ sudo systemd-analyze verify /etc/systemd/system/workload.slice
No output means systemd accepted the unit syntax. This is the quickest way to catch parser errors before the definition is loaded into the manager state.
$ sudo systemctl daemon-reload
This rereads the unit files from disk and rebuilds the dependency graph around the new slice unit.
$ systemctl show -p LoadState -p ActiveState -p SubState workload.slice
LoadState=loaded
ActiveState=inactive
SubState=dead
inactive (dead) is normal here because the slice exists but no service or scope is using it yet.
A custom slice with no Install section typically appears as static in status output. That is normal for an on-demand slice that should activate only when a workload is placed in it.
$ sudo systemd-run --unit=slice-demo.service --slice=workload.slice sleep 300
Running as unit: slice-demo.service; invocation ID: dd8547b820d84a92b22a99bee68891d3
systemd-run creates a transient service so the slice can be verified without editing a production unit first.
Current upstream systemd.special guidance says the slice starts automatically when a unit using Slice= is activated, so a separate systemctl enable or always-active slices.target link is not required for this test flow.
$ systemctl show -p ActiveState -p SubState slice-demo.service
ActiveState=active
SubState=running
$ systemctl show -p Slice slice-demo.service
Slice=workload.slice
The second command is the direct proof that the transient service joined the custom slice instead of the default system.slice.
$ systemctl show -p CPUWeight -p MemoryMax -p TasksMax workload.slice
CPUWeight=200
MemoryMax=524288000
TasksMax=64
$ systemctl status --no-pager --full workload.slice | sed -n '1,12p'
● workload.slice - Workload resource slice
     Loaded: loaded (/etc/systemd/system/workload.slice; static)
     Active: active since Mon 2026-04-13 21:19:18 +08; 18s ago
      Tasks: 1 (limit: 64)
     Memory: 244.0K (max: 500.0M available: 499.7M peak: 460.0K)
        CPU: 1ms
     CGroup: /workload.slice
             └─slice-demo.service
               └─1636 /usr/bin/sleep 300
systemctl show returns the normalized property values, so MemoryMax prints bytes. The status view is easier for human inspection because it also shows the current task count, memory usage, and the child service tree.
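The byte value shown above can be checked directly. systemd parses the M suffix for memory sizes on a 1024 base, so the 500M from the unit file normalizes to 500 mebibytes:

```shell
# 500M in the unit file is 500 MiB, which systemctl show reports in bytes.
echo $((500 * 1024 * 1024))   # 524288000
```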
$ sudo systemctl stop slice-demo.service
For a permanent assignment, set Slice=workload.slice in the real service unit or a drop-in override, then run sudo systemctl daemon-reload and restart that service so the new cgroup placement takes effect.
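As a sketch, such a drop-in could look like the fragment below. The service name myapp.service is a hypothetical placeholder; substitute the real unit you want to move into the slice:

```ini
# /etc/systemd/system/myapp.service.d/10-slice.conf
# Hypothetical drop-in; "myapp.service" is a placeholder name.
[Service]
Slice=workload.slice
```

After daemon-reload and a restart of that service, systemctl show -p Slice on the unit should report workload.slice, just as it did for the transient test service.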