
This is my first question ever posted, so I hope I will provide what you need up front.

The quick question is: why does my 2TB drive show as almost completely full when df -h, du, and sudo du -hsx all report usage in the tens of gigabytes, nowhere near TERABYTES?

Now for the details:

I run an Ubuntu server (20.04) that is used primarily for Docker containers for home media.

lsb_release -d
Description:    Ubuntu 20.04 LTS

I have a 2TB drive installed; the bulk of it is the root partition (other than a very small swap partition).

Additionally, I have a CIFS NAS share mounted to a folder on the local disk, and an iSCSI LUN for backups of my network devices:

df -h -x{tmp,devtmp,squash}fs
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               1.8T  1.6T  143G  92% /
//192.168.1.103/Public   22T   11T   12T  47% /media/Media
/dev/sdb1                18T  1.6T   15T  10% /media/NASBackup

For completeness, here's the output for every mount:

ray@ray-htpc:~/htpc-docker-standup$ df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                    7.8G     0  7.8G   0% /dev
tmpfs                   1.6G  2.0M  1.6G   1% /run
/dev/sda3               1.8T  1.6T  146G  92% /
tmpfs                   7.8G     0  7.8G   0% /dev/shm
tmpfs                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                   7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/loop0              401M  401M     0 100% /snap/gnome-3-38-2004/112
/dev/loop9              128K  128K     0 100% /snap/bare/5
/dev/loop8               47M   47M     0 100% /snap/snapd/16010
/dev/loop7               55M   55M     0 100% /snap/snap-store/558
/dev/loop2               82M   82M     0 100% /snap/gtk-common-themes/1534
/dev/loop5               56M   56M     0 100% /snap/core18/2560
/dev/loop4               62M   62M     0 100% /snap/core20/1611
/dev/loop12              56M   56M     0 100% /snap/core18/2538
/dev/loop10              47M   47M     0 100% /snap/snapd/16292
/dev/loop1              241M  241M     0 100% /snap/gnome-3-34-1804/24
/dev/loop3               50M   50M     0 100% /snap/snap-store/433
/dev/loop11             347M  347M     0 100% /snap/gnome-3-38-2004/115
/dev/loop14              64M   64M     0 100% /snap/core20/1623
/dev/loop6              219M  219M     0 100% /snap/gnome-3-34-1804/77
/dev/loop13              92M   92M     0 100% /snap/gtk-common-themes/1535
//192.168.1.103/Public   22T   11T   12T  47% /media/Media
/dev/sdb1                18T  1.6T   15T  10% /media/NASBackup
tmpfs                   1.6G   24K  1.6G   1% /run/user/1000

This has been up and operating for a year, and the root partition's disk usage stayed in the 2-9 percent range for the longest time. Last week, after seeing errors in one of my containers about the disk being almost full, I checked again and saw 92% used.

However, when I look at the drive to find where the space is going, I see nothing even remotely CLOSE to this level of use:

# pwd
/
# sudo du -hsx * | sort -rh | head -n 40
du: cannot access 'proc/11181/task/11181/fd/4': No such file or directory
du: cannot access 'proc/11181/task/11181/fdinfo/4': No such file or directory
du: cannot access 'proc/11181/fd/3': No such file or directory
du: cannot access 'proc/11181/fdinfo/3': No such file or directory
18G     home
15G     var
6.4G    usr
144M    boot
12M     etc
2.0M    run
92K     root
88K     tmp
44K     snap
16K     opt
16K     lost+found
4.0K    srv
4.0K    mnt
4.0K    media
4.0K    cdrom
0       sys
0       sbin
0       proc
0       libx32
0       lib64
0       lib32
0       lib
0       dev
0       bin

When the largest results are 18G and 15G, where is the almost 2 terabytes of space being used?

I'm the only user on the Ubuntu server. There's no "/home/.local/share/Trash" folder hidden in there, either.

In an effort to clean up, I completely purged all containers from the machine so it would prune all configs and stored files. That recovered 19G total; a far cry from what df says is being used.

Anything, even a thin guess, would be greatly appreciated. I have been digging at this every day for a week, and nothing I've found as a possibility or worth checking comes close to explaining such a great disparity.

Thank you, in advance,

Ray H.

  • Which file system is being used? If it’s ZFS, then you may have some snapshots to prune – matigo Sep 08 '22 at 22:33
  • I appreciate that answer! I am using ext3 file system for the locally installed disk, however. :( – boosted3svt Sep 08 '22 at 22:40
  • When was the disk last checked for problems? Try: sudo tune2fs -l /dev/sda3 | grep checked. – Doug Smythies Sep 08 '22 at 23:49
  • Hello Doug, the last check was at the beginning of June:

    Last checked: Wed Jun 1 09:39:46 2022

    I will certainly force a new check and update here if it makes a difference in the reported usage of sda3.

    – boosted3svt Sep 08 '22 at 23:59
  • I see that /dev/sdb1 also shows 1.6T used. This suggests the possibility that a mount failed, lots of data was written to the mount point, and then later the mount succeeded (obscuring the data written upon the mount point). Unmount /dev/sdb1 and see if du suddenly 'sees' all the storage used. – user535733 Sep 09 '22 at 03:35
  • @user535733 - This is exactly what the issue was. I am not sure how to mark your post as an answer, but please rest assured that I resolved the issue entirely based on it. I unmounted, disconnected entirely, and looked in the mount point to find nearly 1.6TB of data that was replicated (backups) while the mount had been in a failed state. Thank you immensely. – boosted3svt Sep 10 '22 at 03:50
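The failure mode user535733 describes can be reproduced without root or real devices using a private mount namespace (`unshare -rm`, from util-linux): a file written to a directory "disappears" from du the moment another filesystem is mounted over that directory, and reappears once the mount is gone. The /tmp paths below are just for the demonstration.

```shell
# Create a "mount point" directory and write data into it
# while nothing is mounted there (this mimics a failed mount).
mkdir -p /tmp/mountdemo/NASBackup
dd if=/dev/zero of=/tmp/mountdemo/NASBackup/backup.img bs=1M count=10 2>/dev/null

du -sh /tmp/mountdemo    # counts the 10M file

# Inside a private mount namespace, mount a tmpfs over the
# directory: the file is now hidden from du, but still
# consumes space on the underlying filesystem (as df reports).
unshare -rm sh -c '
  mount -t tmpfs none /tmp/mountdemo/NASBackup
  du -sh /tmp/mountdemo
'

du -sh /tmp/mountdemo    # namespace gone, file visible again
```

On the real system, the equivalent check is `sudo umount /media/NASBackup` followed by looking inside the now-bare directory, or bind-mounting the root filesystem somewhere else (`sudo mount --bind / /mnt/rootonly`) to peek under every mount point at once without taking the backup share offline.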

0 Answers