
I've been trying to move 32.6 GB of files to a folder on an external flashdrive to free up space on my laptop's SSD. After opening the source folder in Terminal (and running ulimit -S -s unlimited to keep mv from failing on the long argument list), mv * /media/[username]/8849-14DB/Screenshots/ transferred the first 5.9 GB just fine.

But then, with 26.7 GB still to go:

mv: cannot create regular file '/media/[username]/8849-14DB/Screenshots/Screenshot from 2022-01-06 06-34-27.png': No space left on device
mv: cannot create regular file '/media/[username]/8849-14DB/Screenshots/Screenshot from 2022-01-06 06-34-30.png': No space left on device
mv: cannot create regular file '/media/[username]/8849-14DB/Screenshots/Screenshot from 2022-01-06 06-34-34.png': No space left on device
mv: cannot create regular file '/media/[username]/8849-14DB/Screenshots/Screenshot from 2022-01-06 06-34-39.png': No space left on device
mv: cannot create regular file '/media/[username]/8849-14DB/Screenshots/Screenshot from 2022-01-06 06-35-23.png': No space left on device

[repeated ad infinitum]

This despite the fact that the flashdrive in question is not, in fact, out of space, as shown by:

  • me being able to successfully save a test file to said flashdrive, and
  • both the drive's Properties window and its Disks entry showing that it still has 31.1 GB of free space remaining.

However, when I tried to use the GUI to move the aforementioned test file into the specific directory where I'd been trying to use mv to move the multiple gigabytes of files, I did get a "No space left on device" error, indicating that

  • whatever the issue is, it's specific to that folder, and
  • this isn't a command-line-specific issue.

I looked at this earlier question: Filesystem - No space left error, but there is space. However, the answers to that question were unhelpful to me, as they related to the limited number of inodes available on an ext-family filesystem, whereas, in my case, the destination filesystem is a FAT32-formatted flashdrive.

What issue am I running into that keeps the files from transferring, and how do I overcome it?

EDIT: The target directory has 16,383 files in it.
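(For reference, a count like that can come from something like the following, run inside the directory; a sketch, and filenames containing embedded newlines would throw it off:

ls -1 | wc -l

)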

Vikki
  • how large is the external flash drive? – Esther Jul 20 '22 at 18:06
  • @Esther: 63.5 GB total, including (as I said in the question) 31.1 GB of remaining free space. – Vikki Jul 20 '22 at 18:10
  • @Rinzwind: As I said in the question, the flashdrive is not formatted with an ext-family filesystem. – Vikki Jul 20 '22 at 18:21
  • @Rinzwind it is fat32, so no inodes. But possibly running into filesystem limits for how many files can be in one folder at a time under fat32, see https://superuser.com/questions/446282/max-files-per-directory-on-ntfs-vol-vs-fat32 – Esther Jul 20 '22 at 18:21
  • how many files are there? more than 65,534? – Esther Jul 20 '22 at 18:22
  • @Esther: 16,383. – Vikki Jul 20 '22 at 18:26
  • but the filenames are long, so they actually take up enough space that it might make a difference. I don't know the exact formulas, but that is the most likely reason. You can use NTFS to get around the limit entirely, or put the files in multiple sub-directories. – Esther Jul 20 '22 at 18:27
  • Is this a new flash drive? If so, where did you get it? There are plenty of fake flash drives (even branded as reputable brands) which purport to be large sizes but actually use much smaller storage chips (the onboard controller basically lies and says it has 64 GB but actually only has 4 GB), so you get situations like this. Even things bought off Amazon aren't immune to this, due to the way Amazon allows Marketplace sellers to mix their stock in with Amazon's (a dodgy Marketplace seller supplies Amazon with cheap dodgy drives for fulfilment purposes, and Amazon just adds those to the pool). – Moo Jul 21 '22 at 05:02
  • @Moo: Not new, and it's Walgreens store brand (something I've found to reliably live up to its stated storage capacity). – Vikki Jul 21 '22 at 06:07
  • It might also be possible that the USB drive doesn't actually have 31 GB free... plenty of cheap Chinese storage drives claim to have more storage space than they really do. Some of them go so far as to actually lie to the OS until they run out of space – GACy20 Jul 21 '22 at 10:04
  • @Vikki "Out of space" is not necessarily physical space. There are data structures inside the file system that are also limited, and the knee-jerk response for ext2/ext3/ext4 is "out of inodes". Apparently the corresponding knee-jerk response for FAT32 is "too many files in a single directory". Asking about details just indicates that your wording was not clear enough in the first place. – Thorbjørn Ravn Andersen Jul 21 '22 at 12:17
  • @GACy20 there's no way for a drive to signal to the OS "oh sorry, that space you tried to write into doesn't actually exist", which is why fake flash drives universally corrupt data instead of just giving this error. – user253751 Jul 21 '22 at 13:38
  • @user253751 They can just give "no space left on device" on write while still reporting empty space when queried for stats. – GACy20 Jul 21 '22 at 13:45
  • @GACy20 oRLY? Write failure is another error condition, namely EIO /* I/O error */. – Incnis Mrsi Jul 21 '22 at 14:55
  • @user253751 https://stackoverflow.com/questions/72421193/why-do-write-syscall-fails-with-enospc write and fsync calls CAN return ENOSPC: no space left on device, depending on the OS/drivers/filesystem used – GACy20 Jul 21 '22 at 15:48
  • @GACy20 misses the point again. Yes, the file system can (and should) signal ENOSPC if required block(s) cannot be allocated. But @user253751's point was about drives (that is, block devices). There is no concept of "(un)free block" on the /dev/sdn level. – Incnis Mrsi Jul 21 '22 at 15:55
  • 16,383 is 2^14 - 1. – David Conrad Jul 21 '22 at 17:36
  • Also don't forget that the maximum file size on FAT32 is a bit smaller than 4 GiB, which is a problem for large media files. To avoid these limits, but to keep the cross-platform writability, I use exFAT nowadays instead of FAT32. – pts Jul 22 '22 at 07:23

2 Answers


Ext4 filesystems are not the only ones with limitations on the number of files. FAT32 filesystems have a limit on the number of directory entries that can be stored in a single directory. If you are using short names (8 characters + . + 3-character file extension), the limit is 65,534 files. However, if you use longer names, then every 13 characters of the name takes up a separate long-filename directory entry, which can greatly reduce the number of files you can fit in a directory.

In your case, it looks like each file is actually taking up 4 directory entries, since you have 16,383 files, and 16,383 * 4 = 65,532, which brings you right up against the limit. On closer look, each filename ("Screenshot from 2022-01-06 06-34-27.png" and the like) is 39 characters long: exactly 13 * 3. So each file needs 3 long-filename entries, plus a fourth for the standard short-name (8.3) entry that actually points to the file's contents.
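As a quick sanity check, here is a sketch of that arithmetic (assuming one 8.3 short-name entry plus ceil(length / 13) long-filename entries per file, and ignoring the two entries reserved for "." and ".."):

name='Screenshot from 2022-01-06 06-34-27.png'
len=${#name}                        # 39 characters
entries=$(( (len + 12) / 13 + 1 ))  # 3 long-name entries + 1 short-name entry = 4
echo $(( 16383 * entries ))         # 65532 -- right up against the 65,534 usable entries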

You can get around this by either:

  1. formatting the drive as NTFS, which limits the number of files to about 4 billion (should be enough)
  2. putting the files in different sub-directories, since the limitation is on the number of directory entries in any one directory, and you aren't running into the limit on the total number of files quite yet (see the sketch below)
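A minimal sketch of option 2, run from the source folder (assumptions: the destination path from the question, an arbitrary batch size of 10,000 files per sub-directory, and that the files are all .png):

dest='/media/[username]/8849-14DB/Screenshots'  # destination from the question
i=0
for f in *.png; do
    batch=$(( i / 10000 ))             # new sub-directory every 10,000 files
    mkdir -p "$dest/batch-$batch"      # idempotent; creates batch-0, batch-1, ...
    mv -- "$f" "$dest/batch-$batch/"
    i=$(( i + 1 ))
done

At 4 directory entries per file, 10,000 files per sub-directory stays well under the 65,534-entry ceiling.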
Esther
  • Good catch Esther – Rinzwind Jul 20 '22 at 18:42
  • I created a new directory on the destination drive and tried transferring more of the remaining files to it, and spreading them out between multiple subdirectories does seem to be working. (Not going to reformat it as NTFS, since I want to keep it usable with NTFS-incompatible systems, like my three Power Macs.) Thanx! – Vikki Jul 20 '22 at 20:20
  • both NTFS and exFAT have pretty good kernel support on Linux right now, and the exFAT specs have been opened so it should be supported without problem on all platforms – phuclv Jul 21 '22 at 13:09
  • Can you explain how the limit in sub-directories is established? I always thought the root directory of FAT was limited, but sub-directories would be dynamically extended as needed; I never cared about the details. – U. Windl Jul 21 '22 at 16:26
  • @U.Windl apparently it uses a 16-bit index for directory entries, so it's limited to 2^16 directory entries – Esther Jul 21 '22 at 17:56
  • @U.Windl and then 2 entries are used up for the "." and ".." entries, which leaves you with 65,534 entries per directory – Esther Jul 21 '22 at 18:03
  • @Esther I could not find any reference to the 16-bit index being used for directory entries in any FAT description; do you have a reference? – U. Windl Jul 22 '22 at 12:18
  • @U.Windl https://cscie92.dce.harvard.edu/spring2021/Microsoft%20Extensible%20Firmware%20Initiative%20FAT32%20File%20System%20Specification,%20Version%201.03,%2020001206.pdf, p33-34 – Esther Jul 22 '22 at 13:13
  • Breaking these up into separate sub-directories is a good idea anyway, because having tens of thousands of files in a directory can have a huge impact on performance. – Monty Harder Jul 22 '22 at 20:24
  • number crunching is not my favorite ... but your numbers look rock solid and informative +1 Esther :-) – Raffa Jul 22 '22 at 20:37
  • @Esther Eventually I found it (page 33ff), and I was right: the FAT filesystem itself does not have that restriction, but it is needed for compatibility with old MS-DOS: "There are many FAT file system drivers and disk utilities, including Microsoft’s, that expect to be able to count the entries in a directory using a 16-bit WORD variable. For this reason, directories cannot have more than 16-bits worth of entries." There is no 16-bit counter for directory entries in the FAT filesystem, however. – U. Windl Jul 23 '22 at 13:33
  • I don't think FAT32 reserves directory entries for "." and "..", does it? – Mark Ransom Jul 23 '22 at 19:14

Ext4 file systems have a limited number of inodes. If your filesystem contains a large number of (small) files, it can be "full" despite having lots of disk space (in the sense of free bytes), because it has no free inodes left.

You can view the available inodes on your file system using

df -i <device/mountpoint>
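Illustrative output (the numbers here are made up; IFree is the column to watch):

$ df -i /
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda2      6553600 245760 6307840    4% /

Note that on a FAT32 volume, df -i typically reports zero inodes, since FAT has no inode concept at all.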
  • Two things that show us that this is not the problem are that it was a folder-specific issue, and that it was not an ext filesystem. (As an aside, almost all Unix filesystems have inode limits.) – hildred Jul 22 '22 at 12:52
  • Yes, but in the future, people might stumble upon this question while having an inode problem, so I think it's useful to still have this answer here. – Heinrich supports Monica Jul 22 '22 at 13:28