
I've got an external terabyte drive to store my scrap (actually a partition on it; there are some other partitions too). The filesystem is ext3. Even after I delete some files there (so there should be at least a few hundred MiB free), Nautilus shows zero free space and won't even let me create a directory. How can I fix this?

I use the Ubuntu 10.10 daily build, last updated on the day before yesterday (Oct 03, 2010).

Ivan

5 Answers


ext2/3/4 filesystems keep a certain percentage of blocks reserved for a "privileged" user, so a filesystem can appear full to ordinary users while root can still write to it. My guess is that you are hitting this limit.

By default 5% of the total filesystem size is reserved for the root user. Both the reserved percentage and the "privileged" user can be changed with the tune2fs command.

To change the percentage of reserved blocks to 1%, run (as root):

tune2fs -m 1 /dev/your_disk_partition_device

You can also set the reserved blocks percentage to 0, thus effectively disabling this feature on a certain partition.
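To get a feel for the numbers (a rough sketch; the 1 TB size is taken from the question, and the arithmetic ignores filesystem overhead):

```shell
# Back-of-the-envelope: space held back by the default 5% root reserve.
# Assumes a 1 TB (here: 1000 GB) partition, as in the question.
TOTAL_GB=1000
RESERVED_PCT=5
RESERVED_GB=$(( TOTAL_GB * RESERVED_PCT / 100 ))
echo "${RESERVED_GB} GB reserved for root"   # prints "50 GB reserved for root"
```

So on a terabyte partition, ordinary users can see "0 bytes free" while roughly 50 GB is still held in reserve for root.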

To change the privileged user, run (as root):

tune2fs -u username /dev/your_disk_partition_device

More details on both options are in the tune2fs man page (man tune2fs).

pomsky
  • Didn't help, and df shows the partition to be 100% full. I am running fsck now; maybe it will find/fix something. But anyway, you've given me some very interesting information. – Ivan Oct 03 '10 at 20:50
  • I've set this up, run fsck, deleted 700 more MiB, and checked that the trash is empty, and there are still 0 bytes free... :-( – Ivan Oct 03 '10 at 21:14
  • @Ivan How large is the filesystem? 700MB over 1TB isn't even 0.1%... Can you create a file as root user, e.g., touch a_file? Did fsck report any errors? Filesystems can be automatically remounted read-only if some filesystem error occurs (usually this is done just for the / filesystem, but can be configured for any ext2/3 one). – Riccardo Murri Oct 05 '10 at 07:17

Sounds like a dud disk to me (not releasing the space). Someone I spoke to at the weekend was having the same problem with Windows and a new 0.5 TB HDD (I'm afraid I have no idea of the make). Deleting files would remove them, but the space was not being released.

Is the Disk Utility reporting any SMART failure conditions?

Phil Hannent
  • SMART used to show a couple of remapped bad blocks, and so did Windows chkdsk, but there were no free-space or data-loss problems while I was using Windows and NTFS. Minor problems and bad blocks appeared after I first plugged this drive into Linux, when it held a single 99.9999%-full NTFS partition; all of those problems were fixed by chkdsk. Then I moved all the data to another drive, reformatted this one as ext3, and put the data back. And here we are. – Ivan Oct 04 '10 at 11:10
  • As far as I understand, allocating free space is a matter of the file system, not the disk itself, unless the disk finds all those sectors bad and seals them off, which seems improbable. – Ivan Oct 04 '10 at 11:10
  • Indeed, I was perhaps thinking of SSDs, which do wear levelling; however, given the storage size, it's unlikely you were using one. Could the problem have been related to a volume shadow copy? Was the NTFS partition formatted from a Windows 7 PC? – Phil Hannent Oct 05 '10 at 12:57

You should try deleting more things, just in case; things don't always behave well on an almost-full drive. Try baobab or another directory-size visualizer to find out what can and should be deleted.
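If a GUI tool is not convenient, a rough command-line equivalent using plain du (the mount point below is a placeholder for the actual partition):

```shell
# List the largest directories on the partition, biggest last.
# -x stays on one filesystem; /media/scrap is a placeholder mount point.
du -x --max-depth=2 /media/scrap 2>/dev/null | sort -n | tail -20
```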

Another possibility: the filesystem has run out of inodes. Check with:

df -i / 

tune2fs -l /dev/disk

tune2fs -l will tell you how many inodes were free when the filesystem was last mounted; df -i / will tell you how many are free now. Comparing the two, together with how long ago the filesystem was mounted, shows whether you are out of inodes and whether it happened suddenly.

Then you can do this to figure out where the inodes are:

 find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

That counts the entries in each directory; the directory holding the most will be at the bottom when the output stops. The problem should then be obvious: most likely you'll find one or two directories with millions of entries.
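As an illustrative run on a scratch tree (the directory names here are made up for the demo):

```shell
# Build a scratch tree with one directory holding many entries,
# then run the same pipeline: the busy directory sorts to the bottom.
d=$(mktemp -d)
mkdir "$d/many"
for i in $(seq 50); do : > "$d/many/f$i"; done
find "$d" -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
rm -rf "$d"
```

The last line of the output pairs the count 50 with the "many" directory, which is exactly the shape an inode-exhaustion culprit would have (only with far bigger numbers).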

holla @geek_king


Processes can still occupy storage even after the corresponding files have been deleted, as long as a file handle remains open.

Find them with lsof -nP.

https://unix.stackexchange.com/questions/68523/find-and-remove-large-files-that-are-open-but-have-been-deleted
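In outline, the approach behind that link (a hedged sketch; lsof's +L1 flag lists open files whose on-disk link count is below 1, i.e. deleted):

```shell
# 1. List open-but-deleted files that are still consuming space
#    (requires lsof; run as root to see all processes):
command -v lsof >/dev/null && lsof -nP +L1

# 2. The space is only freed once every descriptor on the file is
#    closed. A quick demonstration of the effect:
tmp=$(mktemp)
exec 3>"$tmp"            # hold a descriptor open on the file
rm "$tmp"                # unlink it: the name is gone...
readlink /proc/$$/fd/3   # ...but /proc still shows it, marked "(deleted)"
exec 3>&-                # closing the descriptor finally frees the blocks
```

So restarting (or signalling) the process that holds the stale descriptor is what actually returns the space.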

Willem
  • Although your answer is 100% correct, it might also become 100% useless if that link is moved, changed, merged into another one or the main site just disappears... :-( Therefore, please edit your answer, and copy the relevant steps from the link into your answer, thereby guaranteeing your answer for 100% of the lifetime of this site! ;-) You can always leave the link in at the bottom of your answer as a source for your material.. – Ravan Sep 10 '15 at 00:43