Unexpected log growth can fill a filesystem fast, turning routine warnings into application crashes, failed updates, and services that refuse to start. Checking log directory sizes isolates the biggest contributors before the disk reaches a critical level.
Most Linux services write under /var/log as plain-text logs that accumulate and rotate into numbered or compressed archives. Systems using systemd may also store binary journal files under /var/log/journal, so directory-level totals often reveal whether traditional daemon logs or the journal is consuming the most space.
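On systemd hosts, `journalctl --disk-usage` reports the journal's total directly. The comparison between journal and plain-text logs can also be sketched with `du` alone; the snippet below uses a throwaway directory (`LOGROOT`, a hypothetical stand-in for /var/log) so it runs without root, and assumes GNU du for the `--exclude` option:

```shell
# Stand-in log tree so the sketch runs without root; on a real host,
# point the du commands at /var/log and run them with sudo.
LOGROOT=$(mktemp -d)
mkdir -p "$LOGROOT/journal"
head -c 524288 /dev/zero > "$LOGROOT/journal/system.journal"  # fake binary journal
head -c 8192   /dev/zero > "$LOGROOT/syslog"                  # fake daemon log

du -sh "$LOGROOT/journal"            # space used by binary journal files
du -sh --exclude=journal "$LOGROOT"  # everything else (traditional logs)
```

If the journal side dominates, tuning SystemMaxUse in journald.conf is usually more effective than chasing individual daemon logs.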
Accessing all log paths typically requires root privileges, and recursive scans can take time on busy hosts with many rotated files. Treat size checks as read-only diagnostics and prefer fixing noisy services or adjusting rotation policies instead of deleting log data blindly.
$ sudo du -h --one-file-system --max-depth=1 /var/log | sort -h
4.0K	/var/log/dist-upgrade
4.0K	/var/log/landscape
4.0K	/var/log/private
44K	/var/log/unattended-upgrades
124K	/var/log/sysstat
184K	/var/log/apt
972K	/var/log/installer
398M	/var/log/journal
522M	/var/log
The largest child directory is the best next drill-down target: rerun the same du command with that directory as the argument, and repeat until the heavy files are isolated.
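That repeated drill-down can be sketched as a loop that keeps descending into the largest immediate subdirectory until none remain. `LOGDIR` below is a hypothetical stand-in tree built in a temp directory so the sketch runs without root; on a real host, start it at /var/log under sudo:

```shell
# Build a small stand-in log tree (hypothetical paths, for illustration).
LOGDIR=$(mktemp -d)
mkdir -p "$LOGDIR/journal" "$LOGDIR/apt"
head -c 1048576 /dev/zero > "$LOGDIR/journal/system.journal"
head -c 4096    /dev/zero > "$LOGDIR/apt/history.log"

# Descend into the largest immediate subdirectory until there are none left.
dir=$LOGDIR
while true; do
    # du -s on the directory glob lists each child with its total size;
    # the numerically largest entry is the next step down.
    next=$(du -s "$dir"/*/ 2>/dev/null | sort -n | tail -n 1 | cut -f2)
    [ -z "$next" ] && break
    dir=$next
done
echo "heaviest leaf directory: $dir"
```

This stops at the deepest directory on the heaviest path, which is where the file-level listing in the next step is most useful.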
$ sudo find /var/log -xdev -type f -printf '%s %p\n' | sort -nr | head -n 10 | numfmt --to=iec-i --suffix=B --padding=7
  12MiB /var/log/demo-10.log
  12MiB /var/log/demo-09.log
  12MiB /var/log/demo-08.log
  12MiB /var/log/demo-07.log
  12MiB /var/log/demo-06.log
  12MiB /var/log/demo-05.log
  12MiB /var/log/demo-04.log
  12MiB /var/log/demo-03.log
  12MiB /var/log/demo-02.log
  12MiB /var/log/demo-01.log
$ sudo find /var/log -xdev -type f -size +100M -printf '%s %p\n' | sort -nr | head -n 20 | numfmt --to=iec-i --suffix=B --padding=7
Adjust +100M to match the available space and urgency.
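Rather than hard-coding the threshold, it can be derived from the filesystem's remaining space. The sketch below flags any file larger than 5% of what is still free, using a temp directory (`TARGET`, a hypothetical stand-in for /var/log) and assuming GNU coreutils df for `--output=avail`; the 5% figure is an arbitrary example, not a recommendation:

```shell
# Stand-in directory so the sketch runs without root; on a real host,
# use /var/log as the target and run find with sudo.
TARGET=$(mktemp -d)
head -c 2097152 /dev/zero > "$TARGET/big.log"

# Available space on the target's filesystem, in 1 KiB blocks.
avail_kb=$(df --output=avail -k "$TARGET" | tail -n 1)
threshold_kb=$((avail_kb / 20))   # flag files above 5% of free space

find "$TARGET" -xdev -type f -size +"${threshold_kb}"k -printf '%s %p\n' |
    sort -nr | numfmt --to=iec-i --suffix=B --padding=7
```

On a nearly full disk the threshold shrinks automatically, surfacing smaller but still significant files; on a healthy disk it stays high and the listing stays quiet.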