16

df

Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       30830588 22454332   6787120  77% /
none                   4        0         4   0% /sys/fs/cgroup
udev             1014124        4   1014120   1% /dev
tmpfs             204996      336    204660   1% /run
none                5120        0      5120   0% /run/lock
none             1024976        0   1024976   0% /run/shm
none              102400        0    102400   0% /run/user

That 77% was 60% just yesterday, and at this rate it will hit 100% within a few days.

I've been monitoring file sizes for a while now:

sudo du -sch /*


9.6M    /bin
65M     /boot
224K    /build
4.0K    /dev
6.5M    /etc
111M    /home
0       /initrd.img
0       /initrd.img.old
483M    /lib
4.0K    /lib64
16K     /lost+found
8.0K    /media
4.0K    /mnt
4.0K    /opt
du: cannot access ‘/proc/21705/task/21705/fd/4’: No such file or directory
du: cannot access ‘/proc/21705/task/21705/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/21705/fd/4’: No such file or directory
du: cannot access ‘/proc/21705/fdinfo/4’: No such file or directory
0       /proc
21M     /root
336K    /run
12M     /sbin
8.0K    /srv
4.1G    /swapfile
0       /sys
4.0K    /tmp
1.1G    /usr
7.4G    /var
0       /vmlinuz
0       /vmlinuz.old
14G     total

It's been giving me (more or less) the same numbers every day. That 14G total is less than half the disk size. Where is the rest going?

My Linux knowledge does not go a lot deeper.

Is it possible for files to not show up here? Is it possible for space to be allocated in any other way?

Kalle Richter
  • 6,180
  • 21
  • 70
  • 103
nizzle
  • 275
  • 1
    The 7.4 G for your /var strikes me as unusually large. I suspect a log file is filling up fast. – Jos Sep 24 '15 at 07:25
  • 4
    Any deleted files? What does lsof -b 2>/dev/null | grep deleted show? (Output might be rather large; iteratively discard entries that seem OK.) – muru Sep 24 '15 at 07:27
  • @muru yes, a bunch of files show up that way. What does it mean? Where are they? Or how do I clean it up? – nizzle Sep 24 '15 at 07:43
  • 2
    A reboot should clean up a lot of them. They're just files opened by various processes that were then deleted. It's normal to have some, but if one of them grew too large, you wouldn't have an easy way to spot it using du. – muru Sep 24 '15 at 07:47
  • @muru You solved it, thanks! Apache logs were getting deleted, but I guess those log files stay 'open' (?) There also were a very large number of PHP warnings being logged. If you make your suggestion into an answer I'll tag it as accepted. – nizzle Sep 24 '15 at 07:53
  • 1
    Note that you may want to ask a second question about what's wrong with your logrotate.conf, as Apache should be configured to close files when log rotation occurs. I say this because the reboot fixes the problem now, but unless I'm missing something, your issue will recur periodically, and having to reboot every week is no fun. If it does recur, I'd suggest seeing whether service httpd restart (or reload) again temporarily alleviates the issue. – Foon Sep 25 '15 at 00:11

3 Answers

29

If there's an invisible growth in disk space, a likely culprit would be deleted files. In Windows, if you try to delete a file opened by something, you get an error. In Linux, the file will be marked as deleted, but the data will be retained until the application lets go. In some cases, this can be used as a neat way to clean up after yourself - application crashes won't prevent temporary files from being cleaned.
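This behavior is easy to reproduce with nothing but standard shell redirection. A minimal sketch: a file held open on a file descriptor survives its own deletion, and its blocks stay allocated until the descriptor is closed.

```shell
#!/bin/sh
# A file held open on a file descriptor survives its own deletion: the name
# disappears (du stops counting it) but the blocks stay allocated (df still
# counts them) until the last descriptor is closed.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=10 2>/dev/null   # write 10 MiB
exec 3<"$tmp"       # hold the file open on fd 3
rm "$tmp"           # unlink the name; the file vanishes from du's view
bytes=$(wc -c <&3)  # ...yet the data is still fully readable via fd 3
echo "deleted, but still holding $bytes bytes"
exec 3<&-           # closing the descriptor is what actually frees the space
```

This is exactly the situation a long-running daemon like Apache ends up in when its log file is deleted out from under it.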

To look at deleted, still-used files:

lsof -b 2>/dev/null | grep deleted

You may have a large number of deleted files - that in itself is not a problem. A single deleted file getting large is a problem.

A reboot should fix this, but if you don't want to reboot, check the applications involved (first column in lsof output) and restart or close reasonable looking ones.

If you ever see something like:

zsh   1724   muru   txt   REG   8,17   771448   1591515  /usr/bin/zsh (deleted)

Where the application and the deleted files are the same, that probably means the application was upgraded. You can ignore those as a source of large disk usage (but you should still restart the program so that bug-fixes apply).

Files in /dev/shm are shared memory objects and don't occupy much space on disk (an inode number at most, I think). They can also be safely ignored. Files named vteXXXXXX are log files from a VTE-based terminal emulator (like GNOME Terminal, Terminator, etc.). These could be large, if you have a terminal window open with lots (and I mean lots) of stuff being output.

muru
  • 197,895
  • 55
  • 485
  • 740
  • 1
    On OP's system, the entire /dev is a udev mount point, so nothing under there takes up any space on the main filesystem. Moreover, /dev/shm is normally implemented as a tmpfs anyway, which is also just a mount point, so the individual files under it don't even take up directory entry space. – Kevin Sep 24 '15 at 18:52
  • Thx! After finding the culprit, a "systemctl restart docker" tackled the issue – FabricioFCarv Oct 18 '22 at 00:33
3

To add to the excellent answer by muru:

  • df shows the space used on the disk,
  • while du shows the total size of the files' content.

Maybe what you don't see with du is the appearance of many, many small files... (Look at the IUsed column of df -i and see whether the number of inodes (i.e., of files) increases a lot over time too.)
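The effect of many tiny files is easy to see with du itself. A sketch (assumes GNU coreutils for the --apparent-size option): a thousand one-byte files sum to about a kilobyte of content, but each one allocates a full filesystem block.

```shell
#!/bin/sh
# 1000 one-byte files: ~1 KB of content, but one full block allocated each.
d=$(mktemp -d)
i=1
while [ "$i" -le 1000 ]; do printf x > "$d/f$i"; i=$((i + 1)); done
apparent=$(du -sk --apparent-size "$d" | cut -f1)  # sum of file lengths (GNU option)
allocated=$(du -sk "$d" | cut -f1)                 # blocks actually used
echo "apparent: ${apparent}K   allocated: ${allocated}K"
rm -r "$d"
```

On a filesystem with 4 KB blocks, the allocated figure comes out hundreds of times larger than the apparent one.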

If you happen to have, say, 1'000'000 (1 million) tiny 1-byte files, du will count that as 1'000'000 bytes total, let's say 1 MB (...purists, please don't cringe).

But on disk, each file is made of 2 things:

  • 1 inode (pointing to the file's data), which itself takes space on disk (256 bytes is a common default on ext4),
  • and each file's data (= the file's content) is put on disk blocks, and a block can't hold data from several files (usually...), so your 1 byte of data will occupy at least 1 block.

Thus, a million 1-byte files will occupy 1'000'000 * size_of_a_block of space for the data, plus 1'000'000 * size_of_an_inode for the inodes... That can amount to several GB of disk usage for 1 million "1-byte" files (with the common 4 KB block size, over 4 GB).

If you have 1024-byte blocks and 256 bytes per inode, your 1'000'000 files will be reported as roughly 1 MB by du, but will consume roughly 1.25 GB on disk (as seen by df)! (In practice, inodes don't each need a dedicated block; they are packed together in the filesystem's preallocated inode table.)
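That back-of-the-envelope calculation can be checked in the shell (the block and inode sizes here are the illustrative values from this answer, not universal constants):

```shell
#!/bin/sh
# Worst-case overhead for 1,000,000 one-byte files.
files=1000000
block_size=1024      # bytes per data block (example value from above)
inode_size=256       # bytes per inode (a common ext4 default)
apparent=$files                                   # 1 byte each, as du would sum
on_disk=$(( files * (block_size + inode_size) ))  # one block + one inode per file
echo "apparent: $apparent bytes (~1 MB)"
echo "on disk:  $on_disk bytes (~$(( on_disk / 1024 / 1024 )) MiB)"
```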

  • 1
    Unless you explicitly use an option (-b or --apparent-size) that tells du to show the apparent size of a file, du will in fact always show the size on disk of the file (total number of blocks used times block size). This can, in fact, be either larger (the normal case) or smaller (in the case of sparse files) than the apparent size of the file. – Jonathan Callen Sep 25 '15 at 03:46
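The distinction drawn in the comment above shows up most dramatically with a sparse file (assumes GNU truncate and du, and a filesystem that supports sparse files; one that doesn't will allocate real blocks instead):

```shell
#!/bin/sh
# A sparse file: huge apparent size, (almost) no blocks allocated.
f=$(mktemp)
truncate -s 100M "$f"                             # a 100 MiB hole, no data written
apparent=$(du -k --apparent-size "$f" | cut -f1)  # nominal length
allocated=$(du -k "$f" | cut -f1)                 # blocks actually on disk
echo "apparent: ${apparent}K   allocated: ${allocated}K"
rm "$f"
```

Plain du reports the allocated figure, which is why it normally matches real disk consumption better than the file's nominal length does.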
0

If /dev/vda1 is filling up, the culprit might be a service such as Jenkins or Docker, and you may need to use the lsof command to find which logs are responsible, then clean them up and cap their size.

T.Todua
  • 551
  • 1
  • 4
  • 15