194

I've run into a problem on one of my servers running 16.04: there is no disk space left.

I have no idea what is taking up the space. Is there a command to list the current directory sizes, so I can traverse and end up in the directory taking up all the space?

terdon
    Check the disk usage analyser – Pranal Narayan May 04 '17 at 15:24
  • 1
    @PranalNarayan No GUI as it's on my server I'm afraid :( – Karl Morrison May 04 '17 at 15:26
  • 1
    Darn you, now I went looking, found this https://bugs.launchpad.net/ubuntu/+source/baobab/+bug/942255 and wish it was a thing. – Sam May 04 '17 at 21:52
  • 1
    wrt "no GUI, is a server": you could install the GUI app (assuming you are happy with it and the support libraries being on a server) and use is on your local screen via X11-tunnelled-through-SSH with something like export DISPLAY=:0.0; ssh -Y <user>@<server> filelight (replace filelight with your preferred tool). Of course with absolutely no space left, if you don't already have the tool installed you'll need to use something else anyway! – David Spillett May 05 '17 at 10:15
  • 11
    @DavidSpillett As stated, *there is no space left on the server*. So I can't install anything. – Karl Morrison May 06 '17 at 09:09
  • @KarlMorrison As I also said in that comment. But I was pointing out that if the server did happen to have the tools and libs present, being a server with no direct GUI access need not be a barrier to using them. Even if the server is not local X remoting over SSH works well for many tools (though some respond rather badly over higher latency links so depending on how remote you and the server are with respect to each other, and the tool in question, YMMV). – David Spillett May 07 '17 at 09:28
  • Even if you have no space you could delete something unnecessary and install a tool – Viktor Mellgren May 08 '17 at 09:04
  • @ViktorMellgren Indeed if you have junk on the server, mine is extremely slim though, which in this case made it a problem. However true as you say, in most cases you could delete something :) – Karl Morrison May 08 '17 at 09:38
  • @KarlMorrison Mind sharing what it turned out to be? :D –  May 10 '17 at 10:36
  • @MarkYisri Indeed I can! The command led me to the directory where Docker keeps its images. A script I had was creating images which, when built, were dangling https://www.projectatomic.io/blog/2015/07/what-are-docker-none-none-images/. This led them to just exist, as my script removes a certain tag. Run this on a cronjob and voilà, <none> images kept taking up space until all space was taken. :) – Karl Morrison May 10 '17 at 10:41
  • Try this on root directory then drill down to directory you want du -sh * | sort -h – Dnyaneshwar Harer Jun 18 '19 at 11:53

14 Answers

241

As always in Linux, there's more than one way to get the job done. However, if you need to do it from the CLI, this is my preferred method:

I start by running this as root or with sudo:

du -cha --max-depth=1 / | grep -E "M|G"

The grep is to limit the returning lines to those with values in the megabyte or gigabyte range. If your disks are big enough, you could add |T as well to include terabyte amounts. You may get some errors on /proc, /sys, and/or /dev since they are not real files on disk. However, it should still provide valid output for the rest of the directories in root. After you find the biggest ones, you can then run the command inside that directory in order to narrow your way down to the culprit. So, for example, if /var was the biggest you could do this next:

du -cha --max-depth=1 /var | grep -E "M|G"

That should lead you to the problem children!

Additional Considerations

While the above command will certainly do the trick, I received some constructive criticism in the comments below pointing out some things you could also include.

  1. The grep I provided could result in the occasional "K" value being returned if the name of the directory or file contains a capital G or M. If you absolutely don't want any K-valued directories showing up, you'll want to up your regex game with something more creative and complex, e.g. grep -E "^[0-9.]*[MG]"
  2. If you know which drive is the issue and it has other mounted drives on top of it that you don't want to waste time including in your search, you could add the -x flag to your du command. Man page description of that flag:

      -x, --one-file-system
          skip directories on different file systems
    
  3. You can sort the output of the du command so that the highest value is at the bottom by appending | sort -h to the end of the command (a combined example follows this list).
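Putting those suggestions together, a sketch of a refined command (the stricter regex from point 1, -x from point 2, and the sort from point 3) would be:

sudo du -chax --max-depth=1 / | grep -E "^[0-9.]*[MG]" | sort -h

As noted in the comments, once you sort the output the grep becomes optional; it just keeps the list short.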

grg
TopHat
    This is exactly what I do. – Lightness Races in Orbit May 04 '17 at 22:58
  • 5
    Your grep returns any folders with the letters M or G in their names too, a creative regex should hit numbers with an optional dot+M|G, maybe "^[0-9]*[.]*[0-9]*[MG]" – Xen2050 May 05 '17 at 06:24
  • 4
    If you know it's one drive that's the issue, you can use the -x option to make du stay on that one drive (provided on the command-line). You can also pipe through sort -h to correctly sort the megabyte/gigabyte human-readable values. I would usually leave off the --max-depth option and just search the entire drive this way, sorting appropriately to get the biggest things at the bottom. – Muzer May 05 '17 at 12:58
  • @Muzer, why mess with -x? for this? Just give the mount point of the drive as the argument to du. – alexis May 05 '17 at 13:20
  • 1
    @alexis My experience is that I sometimes end up with other rubbish mounted below the mountpoint in which I'm interested (especially if that is /), and using -x gives me a guarantee I won't be miscounting things. If your / is full and you have a separately-mounted /home or whatever, using -x is pretty much a necessity to get rid of the irrelevant stuff. So I find it's just easier to use it all the time, just in case. – Muzer May 05 '17 at 13:22
  • Good point, hadn't thought about / in particular... – alexis May 05 '17 at 13:23
  • Really appreciate the input everyone. :) @Xen2050 A more creative/complex regex would remove the possibility of folders I'm not trying to include in my output. However, the only other output that I haven't included is those directories with a K amount of data. It really doesn't affect the user if one shows up. And it's much easier to remember a simple and short regex than a long one (also quicker to write). If you were scripting, though, and you truly couldn't have K values showing up, then yours would be better. – TopHat May 05 '17 at 15:17
  • @Muzer You make some really good points about how you could further breakdown and organize the disk usage, I'll see about mentioning them in my answer. In regards to removing --max-depth I'm not sure I'm totally on-board yet. You force yourself to wait on the system to count disk usage for every directory on the entire system. That's a lot of extra overhead when you could just see for yourself which directory to traverse in a single max-depth and not have to dig down any of the other directories. – TopHat May 05 '17 at 15:23
  • @TopHat Doesn't it already count disk usage for every directory? Or have I misunderstood? Since directory size isn't stored in Linux, it would have to be calculated whether or not you're asking du to bother to display it. The only way to calculate it is to add up the sizes for every file and subdirectory, and the only way to calculate those is to add up those sizes, etc. – Muzer May 05 '17 at 16:06
  • @Muzer it has to recalculate values for each directory and file separately when it goes through everything, in order to get the correct total for each directory and file. It only has to do it once for the one directory with a max-depth. That's why if you don't add a max-depth it takes significantly longer for it to return the results. – TopHat May 05 '17 at 16:10
  • You might want to add -xdev to find's options. This stops it from crossing into other mounted partitions. – CSM May 07 '17 at 20:49
  • 1
    If you have the sort you don't need the grep. – OrangeDog May 08 '17 at 10:21
  • instead of du -cha --max-depth=1 / you can simply use du -cha /*/ – phuclv May 27 '18 at 10:38
  • this command took my machine 45 years to complete – Trevor Hickey Mar 17 '19 at 17:32
  • if you sort -hr it will give you the larger results first, if you like that better. I also do the sort first and then the grep so it still highlights the "G" or the "M". – JohnRDOrazio Apr 05 '20 at 06:39
  • brilliant. uncovered 20GB of mysql binary logs – camslice Mar 14 '24 at 18:36
159

You can use ncdu for this. It works very well.

sudo apt install ncdu

(screenshot: ncdu's interactive listing of directory sizes)
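Once it is installed, a typical invocation (suggested in the comments below) is to run it as root from the filesystem root, with -x to keep it from descending into other mounted filesystems:

sudo ncdu -x /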

Duncan
  • 61
    I'm kicking myself as I actually normally use this program, however since there is no space left I can't install it haha – Karl Morrison May 04 '17 at 15:29
  • 1
    @KarlMorrison i see several possible solutions, just mount it over sshfs on another computer and run ncdu there (assuming you already have an ssh server on it..) - or if you don't have an ssh server on it, you can do the reverse, install ncdu on another server and mount that with sshfs and run ncdu from the mount (assuming you already have sshfs on the server) - or if you don't have either, ... if ncdu is a single script, you can just curl http://path/to/ncdu | sh , and it will run in an in-memory IO stdin cache, but that'll require some luck. there's probably a way to make a ram-disk too – hanshenrik May 04 '17 at 20:09
  • @KarlMorrison or you can boot a live image of Linux and install it in there. –  May 10 '17 at 10:31
  • 3
    once installed, type sudo ncdu / from the command line. sudo because if you don't use sudo it won't report sizes for folders owned by root, and / because if you don't give it that it will only report recursively down from the folder you're in – Max Carroll Sep 18 '19 at 08:38
  • 1
    ncdu is a must have for ubuntu users. – Ciasto piekarz Nov 01 '20 at 12:34
  • This works like a charm – uzaysan Jan 03 '21 at 09:22
  • Thank you so much! That's it! – nlavr Jan 04 '22 at 22:22
  • As mentioned above ncdu supports a -x flag which you'll probably want to pass in order to exclude other mounted filesystems under /. – dimo414 Aug 22 '23 at 22:33
33

I use this command:

sudo du -aBM -d 1 . | sort -nr | head -20

Occasionally, I need to run it from the / directory, as I've placed something in an odd location.
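For reference, here is the same pipeline with each part annotated (a sketch, not a change in behaviour):

# -a: include files as well as directories
# -B M: report sizes in 1M blocks
# -d 1: limit the listing to one directory level
sudo du -aBM -d 1 . | sort -nr | head -20   # numeric sort, largest first; keep the top 20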

Charles Green
  • Giving you a +1 for it working! However TopHat's solution actually read my drive quicker! – Karl Morrison May 04 '17 at 15:47
  • I often find it more useful to do this without the -d 1 switch (and usually with less instead of head -20), so that I get a complete recursively enumerated list of all files and directories sorted by the space they consume. That way, if I see a directory taking up a lot of space, I can just scroll down to see if most of the space is actually taken up by some specific file or subdirectory in it. It's a good way to find some unneeded files and directories to delete to free some space: just scroll down until you see something you're sure you don't want to keep, delete it and repeat. – Ilmari Karonen May 05 '17 at 19:16
  • @KarlMorrison it doesn't read it quicker, it's just that sort waits for the output to be completed before beginning output. – muru May 29 '17 at 05:34
  • @muru Ah alright. I however get information quicker so that I can begin traversing quicker if that's a better term! – Karl Morrison May 29 '17 at 08:19
18

There are already many good answers about ways to find which directories take most of the space. If you have reason to believe that a few large files are the main problem, rather than many small ones, you could use something like:

find / -size +10M
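If you also want to see how big each match is, a variant along these lines (using -xdev to stay on one filesystem) prints the size next to each file:

find / -xdev -type f -size +10M -exec ls -lh {} +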
Luca Citi
16

If you'd rather not use a command, here's an app: Filelight

It lets you quickly visualize what's using disk space in any folder.

(screenshot: Filelight's visualization of disk usage)

Gabriel
13

I don't know Ubuntu and can't check my answer, but I post it here based on my experience as a Unix admin a long time ago.

  1. Find out which filesystem is running out of space

    df -h
    

    will list all filesystems, their sizes and their free space. You only waste time if you investigate filesystems that have enough space. Assume that the full filesystem is /myfilesystem. Check the df output for filesystems mounted on subdirectories of /myfilesystem. If there are any, the following steps must be adapted to this situation.

  2. Find out how much space is used by the files of this filesystem

    du -sh /myfilesystem
    

    The -x option may be used to guarantee that only files that are members of this filesystem are taken into account. Some Unix variants (e.g. Solaris) do not know the -x option for du. Then you have to use some workarounds to find the du of your filesystem.

  3. Now check if the du of the visible files is approximately the size of the used space displayed by df. If so, you can start to find the large files/directories of the /myfilesystem filesystem to clean up.

  4. To find the largest subdirectories of a directory /.../dir, use

    du -sk /.../dir/* | sort -n
    

    The -k option forces du to output the size in kilobytes without any unit. This may be the default on some systems; then you can omit this option. The largest files/subdirectories will be shown at the bottom of the output.

  5. If you have found a large file/directory that you don't need anymore, you can remove it in an appropriate way. Don't bother with the small directories at the top of the output; deleting them won't solve your problem. If you still don't have enough space, you can repeat step 4 in the largest subdirectories, which are displayed at the bottom of the list.

But what happens if the du output does not approximately match the space usage displayed by df?

If the du output is larger, then you have missed a subdirectory where another filesystem is mounted. If the du output is much smaller, then some files are not shown in any directory that du inspects. There can be different reasons for this phenomenon.

  1. Some processes are using a file that was already deleted. Therefore these files were removed from the directory and du can't see them, but for the filesystem their blocks are still in use until the processes close the files. You can try to find the relevant processes (e.g. with lsof, see the sketch after this list) and force them to close those files (e.g. by stopping the application or by killing the processes). Or you simply reboot your machine.

  2. There are files in directories that aren't visible anymore because another filesystem is mounted on one of their parent directories. So if you have a file /myfilesystem/subdir/bigfile and now mount another filesystem on /myfilesystem/subdir, then you cannot see this file anymore and

    du -shx /myfilesystem 
    

    will report a value that does not contain the size of /myfilesystem/subdir/bigfile. The only way to find out if such files exist is to unmount /myfilesystem/subdir and check with

    ls -la /myfilesystem/subdir 
    

    if it contains files.

  3. There may be special types of filesystems that use/reserve space on a disk that is not visible to the ls command. You need special tools to display this.
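Regarding case 1 above: besides grepping lsof's output, lsof can do the filtering itself. The +L1 option selects files with a link count below 1, i.e. files that are deleted but still held open (a sketch):

sudo lsof +L1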

Besides this systematic way using the du command, there are some other approaches. You can use the find command to search for files larger than some value you supply, or for files that were newly created or have a special name (e.g. *.log, core, *.trc), as in the sketch below. But you should always start with df as described in step 1, so that you work on the right filesystem.
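For example, a find along those lines (the size threshold and name patterns are illustrative) could be:

find /myfilesystem -xdev -type f \( -size +100M -o -name '*.log' -o -name core -o -name '*.trc' \)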

miracle173
  • On a busy server you cannot always unmount things. But you can bind mount the upper directory to a temporary location and it will not include the other mounts and will allow access to the hidden files. – Zan Lynx May 07 '17 at 18:25
  • Before systemd I often had mount failures result in filling the / mount with trash. Writing a backup to /mnt/backup without the USB drive connected for example. Now I make sure those job units have mount requirements. – Zan Lynx May 07 '17 at 18:30
  • @ZanLynx Thank you, I never heard of bind mounts before – miracle173 May 08 '17 at 11:01
  • @ZanLynx: Not just on busy servers. Imagine that you have /tmp on a separate file system (e. g. a tmpfs) and something created files in /tmp before it became a mount point to a different file system. Now these files are sitting in the root file system, shadowed by a mount point and you can't access them without a reboot to recovery mode (which doesn't process /etc/fstab) or, like you suggest, a bind-mount. – David Foerster Jun 03 '17 at 16:58
5

I often use this one

du -sh /*/

Then if I find some big folders I'll switch to it and do further investigation

cd big_dir
du -sh */

If needed you can also make it sort automatically with

du -s /*/ | sort -n
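The plain sort -n works here because du -s reports raw kilobyte counts. If you prefer human-readable sizes, pair du's -h with sort's -h instead:

du -sh /*/ | sort -h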
phuclv
4

Try sudo apt-get autoremove to remove unused packages if you haven't done so already.

Charles Green
  • 21,339
2

Not really an answer - but an addendum.

You're hard out of space and can't install ncdu from @erman's answer.

Some suggestions

  • sudo apt clean to delete packages you have already downloaded. SAFE
  • sudo rm -f /var/log/*gz purge log files older than a week or two - will not delete newer/current logs. MOSTLY SAFE
  • sudo lsof | grep deleted list all open files, but filter down to the ones which have been deleted from disk. FAIRLY SAFE
  • sudo rm /tmp/* delete some temp files - if something's using them you could upset a process. NOT REALLY THAT SAFE

That lsof command may return lines like this:

server456 ~ $ lsof | grep deleted
init          1          root    9r      REG              253,0  10406312       3104 /var/lib/sss/mc/initgroups (deleted)
salt-mini  4532          root    0r      REG              253,0        17     393614 /tmp/sh-thd-1492991421 (deleted)

Can't do much for the init line, but the second line suggests salt-minion has a file open which was deleted, and the disk blocks will be returned once all the file handles are closed by a service restart.

Other common suspects here would include syslog / rsyslog / syslog-ng, squid, apache, or any process your server runs which is "heavy".
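For instance, to release the blocks held by the salt-minion process in the output above, restarting that service is enough (the unit name is taken from the example and may differ on your system):

sudo systemctl restart salt-minion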

phuclv
Criggie
2

I find the output of tools like Filelight particularly valuable, but, as in your case, servers normally have no GUI installed. The du command, however, is always available.

What I normally do is:

  • write the du output to a file (du / > du_output.txt);
  • copy the file on my machine;
  • use DuFS to "mount" the du output in a temporary directory; DuFS uses FUSE to create a virtual filesystem (= no files are actually created, it's all fake) according to the du output;
  • run Filelight or another GUI tool on this temporary directory.

Disclaimer: I wrote DuFS, precisely because I often have to find out what hogs disk space on headless machines.
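As a sketch of that workflow (the exact DuFS invocation here is an assumption, check its documentation for the real syntax):

# on the server
sudo du / > du_output.txt
# on your local machine
scp user@server:du_output.txt .
mkdir /tmp/duview
dufs du_output.txt /tmp/duview   # hypothetical syntax; see the DuFS README
filelight /tmp/duview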

Matteo Italia
0

For me it is important to exclude the directory /mnt from the calculations (and it saves time too), because my /mnt contains other partitions. Without exclusion the result is:

$ time sudo du -cha --max-depth=1 /
  (... SNIP ...)

du: cannot access '/proc/27561/fd/3': No such file or directory
du: cannot access '/proc/27561/fdinfo/3': No such file or directory
270G    /
270G    total

real    2m21.540s

With /mnt exclusion and suppressing error messages:

$ time sudo du -cha --max-depth=1 --exclude=/mnt / 2>/dev/null
  (... SNIP ...)

13M     /sbin
1.8M    /run
26G     /
26G     total

real    0m25.019s
  • 2>/dev/null sends error messages to the bit-bucket.
  • 2 minutes is saved by excluding 246G of Windows and other Ubuntu distributions.
  • Accurate / total of 26 GB is now displayed.
  • Other users may need to exclude /media or /run/user/1000 directories.
0

Go to the folder you want to check, and use:

for i in *; do echo "$i" && du -sh "$i"; done

The command prints each file name and the size it takes on the disk in a readable format.

If you want to check folders only, replace * with */.

0

No extra programs needed; just use find to search for files bigger than 50M and sort them by size.

find / -size +50M -type f -exec du -h {} \; | sort -h

kimy82
-1

Similar to @TopHat's answer, but the grep avoids matching entries merely because they have M, G, or T in the name. I don't believe it will miss a size in the first column, and it won't match the filename unless you name files creatively.

du -chad 1 . | grep -E '[0-9]M[[:blank:]]|[0-9]G[[:blank:]]|[0-9]T[[:blank:]]'

Command-line switches, since I didn't know what the c or a did: -c produces a grand total, -h prints human-readable sizes, -a counts files as well as directories, and -d 1 limits the output to one directory level.