Low disk space in Linux turns routine writes into failures. Package updates stop, databases and queues reject new data, and log-heavy services can trigger wider outages once the affected filesystem runs out of either free blocks or free inodes.
A reliable investigation starts by mapping the problem path to the filesystem that contains it. df shows block and inode pressure for that filesystem, findmnt identifies the mounted device behind the path, and then du, find, journalctl, and lsof narrow the problem to a heavy directory tree, oversized files, journal growth, or deleted files that are still being held open by a process.
Most of the commands below need sudo for complete results, and large scans are safest when they stay on the affected filesystem with du -x or find -xdev. The sample output follows a common /var investigation on a current systemd host, but the same sequence works for any other path once it is substituted consistently. On non-systemd hosts or minimal containers, skip the journal step and inspect the relevant log directory directly.
Steps to investigate low disk space in Linux:
- Confirm whether the problem is block usage, inode usage, or both for the path reporting low space.
Replace /var with the directory or mount reporting low space. Either the Use or IUse column reaching 100% can block writes or file creation.
$ df -hPT /var
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vda2      ext4   59G   44G   12G  80% /
$ df -iP /var
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/vda2      3907584 614508 3293076   16% /
$ findmnt -T /var -o TARGET,SOURCE,FSTYPE
TARGET SOURCE    FSTYPE
/      /dev/vda2 ext4
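The two df checks can be combined into a small watchdog. This is a sketch, not part of the original flow: PATH_TO_CHECK and THRESHOLD are illustrative values, and the parsing assumes the standard two-line `df -P` layout.

```shell
# Quick check of both block and inode pressure for the filesystem behind a
# path. PATH_TO_CHECK and THRESHOLD are illustrative, not from the article.
PATH_TO_CHECK=/var
THRESHOLD=90

# Column 5 of `df -P` is Use%; the same column of `df -iP` is IUse%.
blocks=$(df -P "$PATH_TO_CHECK" | awk 'NR==2 {gsub(/%/, ""); print $5}')
inodes=$(df -iP "$PATH_TO_CHECK" | awk 'NR==2 {gsub(/%/, ""); print $5}')

for usage in "blocks=$blocks" "inodes=$inodes"; do
    value=${usage#*=}
    case $value in
        *[!0-9]*|'') continue ;;  # skip "-" reported by some filesystems
    esac
    [ "$value" -ge "$THRESHOLD" ] && echo "WARNING: ${usage}% on $PATH_TO_CHECK"
done
echo "blocks=${blocks}% inodes=${inodes}%"
```

Either figure crossing the threshold is worth a warning, because a filesystem can run out of inodes while df still shows free blocks.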
- Summarize the top-level directories on the affected filesystem before drilling deeper.
-x keeps the scan on the same filesystem, which avoids counting nested mounts under the target path.
$ sudo du -xhd1 /var 2>/dev/null | sort -hr | head -n 12
139M    /var
77M     /var/lib
32M     /var/cache
19M     /var/backups
13M     /var/log
4.0K    /var/tmp
4.0K    /var/spool
4.0K    /var/opt
4.0K    /var/mail
4.0K    /var/local
- Rerun the same scan on the heaviest subtree from the previous step.
Move down one level at a time until the result points to a directory that is small enough to inspect safely.
$ sudo du -xhd1 /var/lib 2>/dev/null | sort -hr | head -n 12
75M     /var/lib
64M     /var/lib/apt
7.1M    /var/lib/demo-space
3.9M    /var/lib/dpkg
44K     /var/lib/systemd
28K     /var/lib/pam
4.0K    /var/lib/misc
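When the first step showed inode pressure rather than full blocks, the same drill-down works by counting inodes instead of bytes. This variant assumes GNU du (coreutils 8.22 or later) for the --inodes option; /var/lib is just the example subtree from above.

```shell
# Count inodes per directory instead of bytes; useful when df -i reports
# IUse% near 100 while blocks are still free. -x keeps the count on the
# same filesystem, as with the byte-based scans.
du -x --inodes -d1 /var/lib 2>/dev/null | sort -nr | head -n 5
```

Directories with huge inode counts but modest byte totals, such as mail spools or session caches full of tiny files, only show up with this kind of scan.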
- List the largest regular files on the same filesystem once the heavy directories are known.
Package indexes or caches may dominate on some systems, while application logs, container layers, backups, or VM images dominate on others. Add a filter such as -size +100M or rerun the command on a smaller subtree when the list is too noisy.
$ sudo find /var -xdev -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -n 8 | numfmt --field=1 --to=iec-i --suffix=B
30MiB /var/cache/demo-space/objects.bin
30MiB /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_noble_universe_binary-arm64_Packages.lz4
18MiB /var/backups/demo-space/db-2026-04-14.sql.gz
12MiB /var/log/demo-space/app.log
8.4MiB /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_noble-updates_restricted_binary-arm64_Packages.lz4
8.0MiB /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_noble-security_restricted_binary-arm64_Packages.lz4
7.0MiB /var/lib/demo-space/archive.db
4.1MiB /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_noble-updates_main_binary-arm64_Packages.lz4
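The -size filter mentioned above can be combined with a modification-time cutoff to surface files that are both large and actively growing. This sketch assumes GNU find for -printf; the +100M and -7 day values are illustrative and should be tuned to the filesystem at hand.

```shell
# List large files changed in the last week, newest growth first by size.
# The size and age cutoffs are illustrative, not from the original run.
find /var -xdev -type f -size +100M -mtime -7 \
    -printf '%s %TY-%Tm-%Td %p\n' 2>/dev/null |
    sort -nr |
    head -n 8 |
    numfmt --field=1 --to=iec-i --suffix=B
```

A file that is both large and recently modified is a better lead than a large but static one, since it indicates ongoing growth rather than old leftovers.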
- Check whether deleted files are still consuming blocks on the filesystem.
$ sudo lsof +L1 /var
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NLINK   NODE NAME
sleep   3458 root    3w   REG   0,78  8388608     0 696511 /var/log/demo-space/held.log (deleted)
Space from a (deleted) file is released only after the owning process closes the descriptor, usually through a service reload, restart, or clean process exit.
Some minimal images do not ship with lsof, but it remains the clearest way to confirm deleted-but-open files when it is available.
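On images without lsof, a rough equivalent is to walk /proc and look for file descriptors whose link target ends in "(deleted)". This is a minimal sketch, assuming a Linux /proc layout; reading other processes' fd directories needs root or equivalent privileges.

```shell
# Fallback when lsof is unavailable: list PID, process name, and path for
# every open descriptor pointing at a deleted file.
for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case $target in
        *'(deleted)')
            pid=${fd#/proc/}
            pid=${pid%%/*}
            printf '%s\t%s\t%s\n' "$pid" \
                "$(cat "/proc/$pid/comm" 2>/dev/null)" "$target"
            ;;
    esac
done
```

The output lacks lsof's device and size columns, but it is enough to identify which service must be reloaded or restarted to release the space.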
- If the pressure is under /var, compare the systemd journal with plain-text log growth.
$ sudo journalctl --disk-usage
No journal files were found.
Archived and active journals take up 0B in the file system.
$ sudo du -xhd1 /var/log 2>/dev/null | sort -hr | head -n 10
13M     /var/log/demo-space
13M     /var/log
64K     /var/log/apt
4.0K    /var/log/private
4.0K    /var/log/journal
A 0B or very small journal usually means little or nothing is being stored persistently in the systemd journal. On busier hosts, compare that total with the largest plain-text log directories to decide where log growth is coming from.
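When the journal is large rather than empty, it helps to check whether it has an explicit size cap. This is a sketch for systemd hosts only; with SystemMaxUse= unset, journald falls back to its built-in default, which journald.conf(5) describes as 10% of the filesystem, capped at 4G.

```shell
# Look for an explicit persistent-journal size cap in journald's config.
# -s suppresses errors for missing files on hosts without drop-in dirs.
grep -hs '^[[:space:]]*SystemMaxUse' \
    /etc/systemd/journald.conf /etc/systemd/journald.conf.d/*.conf ||
    echo "SystemMaxUse not set; journald applies its built-in default"
```

Knowing the cap explains whether journal growth will stop on its own or keep consuming the filesystem until the default limit is reached.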
- Compare the visible directory total with the filesystem total before assuming the root cause is identified.
$ sudo du -shx /var
139M    /var
$ df -hPT /var
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vda2      ext4   59G   44G   12G  80% /
If df stays much higher than the visible tree, the missing space may be in deleted-but-open files, snapshots, reserved blocks, copy-on-write metadata, or usage outside the directory that was scanned. Keep investigating at the filesystem level instead of deleting random files.
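The df-versus-du comparison can be made concrete by computing the gap at the mount point itself. This is a sketch assuming GNU df/du and a findmnt binary from util-linux; both figures are normalized to 1K blocks so they are directly comparable.

```shell
# Compare what du can see under the mount point with what df reports as
# used for the whole filesystem. A large positive gap points at
# deleted-but-open files, snapshots, or reserved space.
mnt=$(findmnt -T /var -o TARGET -n)
df_used=$(df -P "$mnt" | awk 'NR==2 {print $3}')        # 1K blocks
du_used=$(du -sx -B1K "$mnt" 2>/dev/null | awk '{print $1}')
echo "df used: ${df_used}K  du visible: ${du_used}K  gap: $((df_used - du_used))K"
```

A gap of a few percent is normal filesystem overhead; a gap of many gigabytes deserves the lsof or /proc checks above before anything is deleted.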
Related: How to free disk space on Linux
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
