Checking system uptime shows how long a Linux system has been running since its last boot, which helps confirm that a maintenance reboot actually happened, place incidents on a timeline, and judge whether long-lived state may still be affecting the host.
On Linux, uptime reads the kernel's uptime counter and prints the current time, the elapsed uptime, the number of logged-in users, and the 1-, 5-, and 15-minute load averages on a single line. The same counter is exposed through /proc/uptime, and uptime -s converts it into the timestamp at which the current boot started.
Uptime is not the same thing as load average, and the second number in /proc/uptime is cumulative CPU idle time summed across all cores, so it can be much larger than the first value on multi-core systems. In shared-kernel containers, these commands usually report the host kernel's uptime rather than the container's start time, so a container's output reflects the host's last boot, not when the container was created.
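Since uptime -s is simply the current time minus the uptime counter, you can reproduce it by hand from /proc/uptime. A minimal sketch, assuming GNU date and awk (standard on most distributions); the result should match uptime -s to within a second:

$ date -d "-$(awk '{print int($1)}' /proc/uptime) seconds" '+%F %T'
2026-04-11 17:11:37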
$ uptime
12:11:39 up 2 days, 19:00, 0 users, load average: 1.03, 1.28, 1.56
The trailing three values are the 1-, 5-, and 15-minute load averages, not extra uptime fields.
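The same three averages come from /proc/loadavg, which also reports a runnable/total task count and the most recently created PID; the last two fields below are illustrative values:

$ cat /proc/loadavg
1.03 1.28 1.56 2/480 12345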
$ uptime -p
up 2 days, 19 hours, 0 minutes
--pretty is the long option for the same output.
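Because the pretty form is a single readable phrase, it drops cleanly into scripts and notifications; a small sketch (the hostname web01 is just an example):

$ echo "$(hostname) has been $(uptime -p)"
web01 has been up 2 days, 19 hours, 0 minutes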
$ cat /proc/uptime
241202.49 2341550.57
The first field is uptime in seconds, including time spent suspended, and the second field is cumulative CPU idle time across all CPUs, so it can exceed the uptime value on multi-core systems.
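Dividing the idle total by the core count and the elapsed uptime turns the raw counters into an average idle percentage. A sketch assuming awk and nproc are available; with the values above, a hypothetical 12-core machine would print roughly 80.9%:

$ awk -v cores="$(nproc)" '{printf "idle: %.1f%% per core\n", $2 / cores / $1 * 100}' /proc/uptime
idle: 80.9% per core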
$ uptime -s
2026-04-11 17:11:37
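The boot timestamp is also recorded in the utmp database, so who -b provides an independent cross-check, reported to minute precision:

$ who -b
         system boot  2026-04-11 17:11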
Related: How to check last boot time in Linux for the last recorded boot entry and retained reboot history.