Find What's Eating Your Disk Space in Seconds
A full disk at 2am is every SRE's nightmare. These two commands help you find the culprit in seconds.
What
The `du` (disk usage) and `df` (disk free) commands help you quickly identify what's consuming disk space. Combined with `sort` and `head`, you can find the biggest offenders in seconds instead of guessing.
Why It Matters
Disk space issues cause cascading failures — databases crash, logs stop writing, deployments fail. Knowing how to quickly find large files and directories is a critical troubleshooting skill that saves you during incidents.
Example
# Check overall disk usage
df -h
# Find top 10 largest directories from /
du -sh /* 2>/dev/null | sort -rh | head -10
# Find files larger than 100MB
find / -type f -size +100M 2>/dev/null
# Find top 10 largest files in current directory tree
find . -type f -exec du -sh {} + | sort -rh | head -10
Common Mistake
Only checking `df`, seeing the disk is full, and then deleting files at random. This often leads to deleting something important or missing the actual cause (like runaway logs).
Quick Fix
Always drill down with `du` first to find WHICH directory is growing, then investigate. Common culprits: /var/log, /tmp, Docker images (/var/lib/docker), and old package caches.
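The drill-down works one level at a time. A minimal sketch, assuming GNU or BSD `du` (`-d1` limits output to immediate children, `-x` stays on one filesystem so you don't cross into other mounts):

```shell
# Start at the suspect mount point, one level deep, biggest first
du -xh -d1 /var 2>/dev/null | sort -rh | head -10

# Repeat on the largest entry (e.g. /var/log) until you hit the culprit
du -xh -d1 /var/log 2>/dev/null | sort -rh | head -10
```

`-d1` keeps the output short enough to read at a glance, which matters more during an incident than a complete listing.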
Key Takeaways
- `df -h` shows overall disk usage
- `du -sh /*` shows per-directory size
- `sort -rh | head -10` shows biggest first
- `find -size +100M` finds large files
- Common culprits: /var/log, /tmp, Docker
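The drill-down described above can also be scripted. A minimal sketch (`drill_down` is our own hypothetical helper, not a standard tool; it assumes `du` supports `-d`):

```shell
# drill_down: repeatedly descend into the largest subdirectory,
# printing the path at each level until there are no subdirectories left.
drill_down() {
  dir=$1
  while :; do
    # -d1 lists immediate children only; -x stays on one filesystem;
    # awk skips the entry for the directory itself (du separates with a tab)
    next=$(du -x -d1 -k "$dir" 2>/dev/null | sort -rn \
      | awk -F'\t' -v d="$dir" '$2 != d {print $2; exit}')
    [ -z "$next" ] && break
    echo "$next"
    dir=$next
  done
}

# Example: drill_down /var
```

Each printed line is the biggest consumer at that depth, so the last line is usually the directory to investigate.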