
UPDATE 2
Thank you all for the help. It turned out to be exactly what @steeldriver suggested.
Someone had rebooted the machine and manually mounted my /dev/sdc1 back to its original path /mnt/WD.
What I did was umount and disconnect the external drive /dev/sdc, then rm the /mnt/WD directory that was still sitting on /dev/sda1, after which / looked normal again in both df and du.
Finally, I reconnected the drive and mounted /dev/sdc1 back.
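For reference, the cleanup was roughly this sequence (device names and paths are from my setup; only run the rm step after double-checking that the directory really holds the stale copy on /dev/sda1, not the mounted external drive):

umount /mnt/WD                  # unmount the external drive so the directory on / becomes visible again
du -sh /mnt/WD                  # confirm this now shows the stale data that was hiding under the mount
rm -r /mnt/WD/*                 # remove the stale copy (or move it elsewhere if you want to keep it)
mount /dev/sdc1 /mnt/WD         # remount the external drive
df -h /                         # / should now agree with du again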

So, here is what the file system looks like:
I have 2 internal drives and 1 external drive, all formatted as ext4.
/home is on sdb1; /mnt/WD is on sdc1, which is the external drive.

The df command shows /dev/sda1, mounted on /, as 100% full.
But du -h --max-depth=1 -x reports only 3.3G used, and my sda1 is supposed to be about 44G.

Any ideas what I should be doing next?

root@X3650:/# df
Filesystem      1K-blocks      Used  Available Use% Mounted on
udev              2013436         0    2013436   0% /dev
tmpfs              405068      6636     398432   2% /run
/dev/sda1        45586292  44282388          0 100% /
tmpfs             2025340         0    2025340   0% /dev/shm
tmpfs                5120         0       5120   0% /run/lock
tmpfs             2025340         0    2025340   0% /sys/fs/cgroup
/dev/sdb1        88874936  35925504   48391816  43% /home
tmpfs              405068         8     405060   1% /run/user/0
/dev/sdc1      5813235212 196499792 5323694628   4% /mnt/WD
tmpfs              405068         4     405064   1% /run/user/111

root@X3650:/# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    1   50G  0 disk
├─sda1   8:1    1 44.4G  0 part /
├─sda2   8:2    1    1K  0 part
└─sda5   8:5    1  5.6G  0 part [SWAP]
sdb      8:16   1 86.6G  0 disk
└─sdb1   8:17   1 86.6G  0 part /home
sdc      8:32   0  5.5T  0 disk
└─sdc1   8:33   0  5.5T  0 part /mnt/WD
sr0     11:0    1 1024M  0 rom

root@X3650:/# du -h --max-depth=1 -x
56K     ./tmp
4.0K    ./lib64
8.0K    ./media
245M    ./lib
36M     ./srv
12M     ./sbin
2.5G    ./usr
4.0K    ./mnt
8.3M    ./etc
4.0K    ./opt
23M     ./root
9.5M    ./bin
16K     ./lost+found
422M    ./var
35M     ./boot
3.3G    .
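
To put numbers on the mismatch: df reports about 42G used on /dev/sda1 (44282388 1K-blocks), while du -x only accounts for 3.3G, so roughly 39G is being used by something du cannot see from /. A quicker way to compare just the two totals:

df -h /      # what the filesystem itself reports as used
du -shx /    # what du can actually reach from /, staying on this filesystem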

UPDATE

root@X3650:~# df -i
Filesystem        Inodes  IUsed     IFree IUse% Mounted on
udev              503358    402    502956    1% /dev
tmpfs             506335    591    505744    1% /run
/dev/sda1        2916352 201888   2714464    7% /
tmpfs             506335      1    506334    1% /dev/shm
tmpfs             506335      4    506331    1% /run/lock
tmpfs             506335     15    506320    1% /sys/fs/cgroup
/dev/sdb1        5677056 103164   5573892    2% /home
tmpfs             506335     14    506321    1% /run/user/111
tmpfs             506335     11    506324    1% /run/user/0
/dev/sdc1      183140352 384429 182755923    1% /mnt/WD
  • @ubfan1 hey, I thought --max-depth means to show the sum of the used space of everything inside that folder. Without --max-depth, I still get "3.3G ." at the very end. – Cayprol Jul 18 '18 at 02:59
  • What is the output of df -i? See if all the Inodes are taken up. – Terrance Jul 18 '18 at 03:29
  • Sometimes the cause is writing to a mountpoint when there's not actually a device mounted there - the used space is hidden when the device subsequently gets mounted – steeldriver Jul 18 '18 at 03:38
  • @Terrance df -i seems fine to me, am I reading it correctly? The result is added in the edit above. – Cayprol Jul 18 '18 at 04:33
  • Yep, that is good. I have seen all the inodes used up before, which also makes a drive read as full. Just one more possible cause to cross off the list. Thank you! – Terrance Jul 18 '18 at 04:34
  • @steeldriver Hi, are you suggesting I manually mount /dev/sda1 to / ? – Cayprol Jul 18 '18 at 04:34
  • No - I'm suggesting that you manually unmount /dev/sdc1 and then see if there are still files at /mnt/WD (and the same for /dev/sdb1 and /home - although note that you will need to drop to recovery mode in order to unmount /home) – steeldriver Jul 18 '18 at 09:26
  • @steeldriver hi, thanks, your suggestion turned out to be true. See the edit in the OP. – Cayprol Jul 19 '18 at 00:55
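
Side note for anyone who finds this later: steeldriver's scenario can also be confirmed without unmounting anything, by bind-mounting / at a second location, since a plain (non-recursive) bind mount shows the directory as it exists on the root filesystem, without the submounts on top. A rough sketch (/mnt/rootview is just an example name):

mkdir /mnt/rootview
mount --bind / /mnt/rootview
du -sh /mnt/rootview/mnt/WD     # size of whatever sits on / underneath the sdc1 mount
umount /mnt/rootview
rmdir /mnt/rootview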
