Low disk space on Linux can stop services, break package upgrades, and cause log writes to fail with “No space left on device” errors.
A useful investigation starts by confirming which filesystem is actually full, since a “full disk” report usually refers to a specific mount point (for example /var/, /home/, or a dedicated data volume) backed by a particular block device. The combination of df (filesystem totals), findmnt (mount mapping), and lsblk (device view) makes it clear where to focus deeper scans.
The failure mode is not always block storage exhaustion; inode exhaustion (too many small files), deleted-but-open files, snapshots, or aggressive logging can consume space in ways that are not obvious from directory listings. Large du and find scans can be slow on busy systems, so staying on the target filesystem (-x/-xdev) and working from the mount point keeps results accurate and limits collateral I/O.
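As a first triage pass, it can help to survey real filesystems only; a minimal sketch, assuming GNU coreutils df and its -x/--exclude-type flag (the excluded types are illustrative):

```shell
# Survey block usage on real filesystems; tmpfs/devtmpfs/overlay are
# excluded so RAM-backed and container mounts don't crowd out real disks.
df -hPT -x tmpfs -x devtmpfs -x overlay

# Same survey for inodes, since IUse% at 100% fails writes just like Use%.
df -iP -x tmpfs -x devtmpfs -x overlay
```

Whichever mount point tops these lists is where the deeper du and find scans below should start.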
Related: How to check disk space and usage in Linux
Related: How to free disk space on Linux
$ df -hPT /mnt/uuiddemo
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/loop1     ext4  224M  121M   86M  59% /mnt/uuiddemo
$ df -iP /mnt/uuiddemo
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop1      65536    17 65519    1% /mnt/uuiddemo
$ findmnt -T /mnt/uuiddemo
TARGET        SOURCE     FSTYPE OPTIONS
/mnt/uuiddemo /dev/loop1 ext4   rw,relatime
$ lsblk -f /dev/loop1
NAME  FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop1                               85.8M    54% /mnt/uuiddemo
Either block usage (Use% from df -h) or inode usage (IUse% from df -i) reaching 100% can trigger “No space left on device”.
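The inode side is easy to demonstrate in isolation with a hypothetical scratch directory: each empty file costs one inode but essentially no data blocks.

```shell
# Hypothetical scratch location; adjust the path for your system.
mkdir -p /tmp/inode-demo
for i in $(seq 1 1000); do : > "/tmp/inode-demo/f$i"; done

# IUsed climbs by ~1000 while block usage barely moves.
df -iP /tmp/inode-demo
df -hP /tmp/inode-demo

rm -r /tmp/inode-demo
```

On a filesystem that has run out of inodes this way, deleting files (not shrinking them) is the only fix.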
$ sudo du -xhd1 /mnt/uuiddemo 2>/dev/null | sort -h
16K   /mnt/uuiddemo/lost+found
21M   /mnt/uuiddemo/logs
41M   /mnt/uuiddemo/backups
61M   /mnt/uuiddemo/cache
121M  /mnt/uuiddemo
Use du -x to stay on the same filesystem and avoid counting other mounts (bind mounts, NFS mounts, or /proc/).
$ sudo find /mnt/uuiddemo -xdev -type f -size +10M -exec ls -lh {} \; 2>/dev/null
-rw-r--r-- 1 root root 60M Jan 13 20:34 /mnt/uuiddemo/cache/objects.bin
-rw-r--r-- 1 root root 20M Jan 13 20:34 /mnt/uuiddemo/logs/app.log
-rw-r--r-- 1 root root 40M Jan 13 20:34 /mnt/uuiddemo/backups/db-2026-01-01.sql.gz
For sparse files, ls -lh shows apparent size while du -h /path/to/file shows blocks actually consumed.
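The difference shows up clearly with a hypothetical scratch file: truncate creates a sparse file whose apparent size is large while no data blocks are allocated.

```shell
# Create a 1 GiB sparse file: the metadata says 1G, but no data is written.
truncate -s 1G /tmp/sparse-demo.img

ls -lh /tmp/sparse-demo.img   # apparent size reported by ls: 1.0G
du -h  /tmp/sparse-demo.img   # blocks actually consumed: typically 0

rm /tmp/sparse-demo.img
```

This is why summing ls output can wildly overstate real usage for VM images and database files.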
$ sudo journalctl --disk-usage
Archived and active journals take up 10.0M in the file system.
$ sudo grep -E '^(Storage|SystemMaxUse|SystemKeepFree|RuntimeMaxUse|RuntimeKeepFree)=' /etc/systemd/journald.conf 2>/dev/null
Reducing journal retention can free space quickly but may remove incident history needed for troubleshooting and forensics.
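If a trim is still warranted, journald can vacuum archived journals directly; a sketch using journalctl's vacuum flags, where the 100M and 2weeks limits are illustrative:

```shell
# Shrink archived journal files until they total at most ~100M…
sudo journalctl --vacuum-size=100M

# …or drop archived journal files older than two weeks.
sudo journalctl --vacuum-time=2weeks
```

Vacuuming only removes archived journal files, so the active journal keeps accumulating until the limits in journald.conf apply.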
$ sudo lsof +L1 | head -n 5
COMMAND   PID USER  FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
sh      12719 root  3w  REG    7,1        0     0   16 /mnt/uuiddemo/logs/held.log (deleted)
sleep   12720 root  3w  REG    7,1        0     0   16 /mnt/uuiddemo/logs/held.log (deleted)
Space held by a (deleted) file is released only after the owning process closes the file, typically via a service restart or reload.
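When a restart is not an option, the space can often be reclaimed by truncating the deleted file through /proc; the PID and FD below are the hypothetical ones from the lsof output above.

```shell
# sh (PID 12719) holds the deleted file open on FD 3 (the "3w" column).
# Redirecting through /proc/<pid>/fd/<fd> reopens the underlying inode with
# O_TRUNC, freeing its blocks immediately while the process keeps its FD.
sudo sh -c ': > /proc/12719/fd/3'
```

The writer's file offset is unchanged, so a process that opened the log without O_APPEND will leave a sparse gap before its next write; for well-behaved services a restart or reload remains the cleaner fix.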
$ sudo du -xhd1 /var/log 2>/dev/null | sort -h
4.0K  /var/log/private
4.0K  /var/log/sysstat
8.0K  /var/log/journal
140K  /var/log/apt
920K  /var/log
Target recently changed large logs with find /var/log -type f -mtime -1 -size +100M to catch runaway debug output.
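The same filter can be combined with du and sort to rank offenders smallest to largest; the +100M threshold here is illustrative and worth lowering on small volumes.

```shell
# Recently modified files over 100M on /var/log's filesystem, largest last;
# -xdev keeps the scan off other mounts, matching the du -x approach above.
sudo find /var/log -xdev -type f -mtime -1 -size +100M \
  -exec du -h {} + 2>/dev/null | sort -h
```

An empty result means today's growth is elsewhere, and the deleted-but-open check with lsof +L1 is the next stop.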