Managing a systemd timer controls when scheduled cleanup, backup, sync, and maintenance work is allowed to run on a Linux host. Checking the timer before pausing it, re-enabling it, or changing its schedule helps avoid missed runs and removes guesswork about whether the schedule is actually active.
A .timer unit schedules activation of another unit, usually a .service with the same basename unless Unit= points elsewhere. systemctl status shows the timer's loaded and runtime state plus the next trigger, while systemctl list-timers --all shows the next and previous run across the timers currently known to the manager.
Examples below use a generic cache-prune.timer unit name. Replace it with the real timer, add sudo for system timers, and use systemctl --user for per-user timers. When the timer file or a drop-in changes, run systemctl daemon-reload and then restart the timer, because systemctl reload is not applicable to timer units.
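For orientation, a minimal pair of unit files behind such a timer might look like the following sketch. The cache-prune name, paths, and schedule are placeholders for this article, not units shipped by systemd:

```ini
# /etc/systemd/system/cache-prune.service  (hypothetical example)
[Unit]
Description=Cache Prune

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cache-prune

# /etc/systemd/system/cache-prune.timer  (hypothetical example)
[Unit]
Description=Cache Prune Schedule

[Timer]
# Run daily at midnight (illustrative schedule).
OnCalendar=daily

[Install]
WantedBy=timers.target
```

Because the timer and service share the basename cache-prune, the timer activates cache-prune.service by default without needing an explicit Unit= setting.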
| Action | Example command |
|---|---|
| Show timer status | systemctl status --no-pager --full unit.timer |
| List next and previous runs | systemctl list-timers --all |
| Start a stopped timer now | systemctl start unit.timer |
| Stop future activations | systemctl stop unit.timer |
| Apply edited timer settings | systemctl daemon-reload && systemctl restart unit.timer |
| Enable and start at boot | systemctl enable --now unit.timer |
| Disable and stop now | systemctl disable --now unit.timer |
Steps to manage a systemd timer:
- Inspect the timer state before changing it.
```
$ systemctl status --no-pager --full cache-prune.timer
● cache-prune.timer - Cache Prune Schedule
     Loaded: loaded (/etc/systemd/system/cache-prune.timer; disabled; preset: enabled)
     Active: active (waiting) since Mon 2026-04-13 13:50:34 UTC; 8s ago
    Trigger: Mon 2026-04-13 13:52:34 UTC; 1min 51s left
   Triggers: ● cache-prune.service
```
Replace cache-prune.timer with the actual timer unit. Loaded: shows the unit file path and whether it is enabled for boot, Active: shows whether the timer is currently waiting or dead, Trigger: shows the next scheduled run, and Triggers: shows the unit that will be activated.
- Review the next and previous runs from the manager view.
```
$ systemctl list-timers --all
NEXT                        LEFT      LAST PASSED UNIT                         ACTIVATES
Mon 2026-04-13 13:52:34 UTC 1min 59s  -    -      cache-prune.timer            cache-prune.service
Mon 2026-04-13 13:59:48 UTC 9min      -    -      systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2026-04-13 19:20:33 UTC 5h 29min  -    -      apt-daily.timer              apt-daily.service
##### snipped #####
```
--all keeps inactive timers in the listing as well, where they appear with - in the time columns instead of disappearing from the output.
Upstream systemd defaults AccuracySec= to one minute unless the timer unit overrides it, so a timer may elapse slightly later than the literal schedule expression suggests.
For OnCalendar= timers, Persistent=true causes the service to run immediately on the next activation if one or more scheduled runs were missed while the timer was inactive.
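A [Timer] section combining both settings might look like this sketch; the hourly schedule is illustrative:

```ini
[Timer]
# Fire at the top of every hour (illustrative schedule).
OnCalendar=hourly
# Tighten the default one-minute coalescing window so the timer
# elapses close to the literal calendar time.
AccuracySec=1s
# If a scheduled run was missed while the timer was inactive,
# trigger the service once immediately on the next activation.
Persistent=true
```

Tightening AccuracySec= trades power-saving batching for punctuality, so keep the default unless the workload actually needs precise trigger times.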
- Start the timer when it should begin waiting immediately without changing boot-time enablement.
```
$ sudo systemctl start cache-prune.timer
$ systemctl is-active cache-prune.timer
active
$ systemctl is-enabled cache-prune.timer
disabled
```
start activates the timer now, but it does not create boot-time enablement symlinks. The separate disabled result confirms that the timer is waiting for its next run without being configured to start automatically at boot.
- Stop the timer when future activations should pause.
```
$ sudo systemctl stop cache-prune.timer
$ systemctl status --no-pager --full cache-prune.timer
○ cache-prune.timer - Cache Prune Schedule
     Loaded: loaded (/etc/systemd/system/cache-prune.timer; disabled; preset: enabled)
     Active: inactive (dead)
    Trigger: n/a
   Triggers: ● cache-prune.service
```
Stopping a timer prevents future activations, but it does not stop a service instance that the timer already launched. Manage the triggered service separately if it is still running.
- Reload the manager and restart the timer after editing the timer file or an override.
```
$ sudo systemctl daemon-reload
$ sudo systemctl restart cache-prune.timer
$ systemctl status --no-pager --full cache-prune.timer
● cache-prune.timer - Cache Prune Schedule
     Loaded: loaded (/etc/systemd/system/cache-prune.timer; disabled; preset: enabled)
     Active: active (waiting) since Mon 2026-04-13 13:51:03 UTC; 3ms ago
    Trigger: Mon 2026-04-13 13:56:03 UTC; 4min 59s left
   Triggers: ● cache-prune.service
```
The new Trigger: time reflects the updated schedule after the manager rereads the unit definition and the timer is restarted.
Timer units do not support a live reload path. A command such as sudo systemctl reload cache-prune.timer fails with Failed to reload cache-prune.timer: Job type reload is not applicable for unit cache-prune.timer.
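A common way to change the schedule without editing the packaged unit file is a drop-in, for example via sudo systemctl edit cache-prune.timer, which writes an override file along these lines; the new calendar value here is only an example:

```ini
# /etc/systemd/system/cache-prune.timer.d/override.conf
[Timer]
# OnCalendar= is a list setting: an empty assignment clears any
# previously configured entries before the new one is added.
OnCalendar=
OnCalendar=*-*-* 04:00:00
```

After saving the drop-in, the timer still needs a restart before the new Trigger: time takes effect.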
- Enable and start the timer immediately when it should begin waiting now and on later boots.
```
$ sudo systemctl enable --now cache-prune.timer
Created symlink /etc/systemd/system/timers.target.wants/cache-prune.timer -> /etc/systemd/system/cache-prune.timer.
$ systemctl is-enabled cache-prune.timer
enabled
$ systemctl is-active cache-prune.timer
active
```
enable creates the install symlink for future boots, and --now adds the immediate start in the same command.
- Disable and stop the timer immediately when it should neither wait now nor start again at boot.
```
$ sudo systemctl disable --now cache-prune.timer
Removed "/etc/systemd/system/timers.target.wants/cache-prune.timer".
$ systemctl is-enabled cache-prune.timer
disabled
$ systemctl is-active cache-prune.timer
inactive
```
When a timer has no [Install] section it is static, so it can still be started manually or by another dependency but cannot be enabled or disabled directly.
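For a timer to be enableable at all, its unit file needs an [Install] section such as the following; without it, systemctl is-enabled reports static:

```ini
[Install]
# Pulled in by timers.target at boot once the timer is enabled,
# which is what the enable command's symlink wires up.
WantedBy=timers.target
```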
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
