
Most people will think of using dd or sfill to wipe free space with zeros... however, the limitation with FAT32 drives is that the maximum file size is 4 GiB minus one byte.

How can I create multiple smaller files to wipe the free space of a FAT32 drive and overcome this limit?

  • Not a duplicate... Can't use dd to fill a full FAT32 drive... And my question was how to wipe in zeros @karel – Andrew Larson Mar 31 '19 at 16:18
  • The accepted answer to the linked question does not use dd. It uses the command-line utility shred, which can set all bits to zero on the last iteration by adding the -z option to the shred command. – karel Mar 31 '19 at 18:10
  • @karel, is shred included with Ubuntu by default? – Andrew Larson Mar 31 '19 at 18:32
  • The shred program is provided by the coreutils package, which is included with Ubuntu by default and installed at /usr/bin/shred. – karel Mar 31 '19 at 18:38
  • Thanks, I'll make sure to update my answer to use shred instead then, including the scripts. – Andrew Larson Mar 31 '19 at 19:26

2 Answers


This really sounds like an XY problem.

srm, or shred, which is more generally available (being part of coreutils), overwrites the contents of existing files (if everything goes well, e.g. the file system overwrites in place; see the other gotchas mentioned in their manuals).

If you wanted this, you could do so with shred -n 1 --random-source /dev/zero, or with a tiny shell script that gets the file's size and then runs dd conv=notrunc if=/dev/zero of=the_file_to_be_zeroed_out bs=... count=....
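
A minimal sketch of such a script could look like this (assuming GNU stat and dd; the 1 MiB block size and the argument handling are my own choices, not a tested tool):

    #!/bin/sh
    # Zero out the file given as $1 in place, without changing its size.
    file="$1"
    size=$(stat -c %s "$file")       # file size in bytes
    full=$(( size / 1048576 ))       # whole 1 MiB blocks
    rest=$(( size % 1048576 ))       # leftover bytes at the end
    dd conv=notrunc if=/dev/zero of="$file" bs=1M count="$full"
    # finish the tail with 1-byte blocks (under 1 MiB, so speed is fine)
    [ "$rest" -gt 0 ] && dd conv=notrunc if=/dev/zero of="$file" \
        bs=1 seek=$(( full * 1048576 )) count="$rest"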

But as far as I understand, this is not what you need. (Unless you can make sure that you only ever delete a file after zeroing it out, which sounds truly cumbersome and hardly feasible.) What you need is to zero the space that's currently unused by any file, so that instead of holding the remains of previously deleted files (which would still have to be compressed) the unused area is as compressible as possible, that is, preferably full of zeros.

You should create a new file that's as large as possible and full of zeros. Go with something like dd if=/dev/zero of=tmpfile bs=1M, wait until it exits with the error message "No space left on device", and then delete this file. Your image is ready to be compressed – but don't forget to umount it first!
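
Put together, the whole sequence would be something like this (the mount point is a placeholder for wherever your image is mounted):

    cd /PATH/TO/MOUNTED/DRIVE
    dd if=/dev/zero of=tmpfile bs=1M   # runs until "No space left on device"
    rm tmpfile                         # the space is free again, but now zeroed
    cd / && sudo umount /PATH/TO/MOUNTED/DRIVE
    # the image is now ready to be compressed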

egmont
  • I already figured out my problem... and updated my question @egmont. Reload the page and take a look; it's not a question anymore, rather a tutorial now. – Andrew Larson Mar 30 '19 at 10:20
  • An important piece of information was lost during your edit: namely that your goal is to make the image compressible. If the goal is still that, why use srm, which overwrites with random data, rather than a simple rm? – egmont Mar 30 '19 at 10:24
  • srm doesn't write random data when using -llz. These arguments allow for one pass (-ll), with the last pass being a zero wipe (-z), making this a one-pass zero wipe. – Andrew Larson Mar 30 '19 at 10:25
  • Okay, but you still install a package whose purpose is to safely delete, then use it asking it not to be safe; whereas you could just as well not install any additional tool and just use rm. – egmont Mar 30 '19 at 10:27
  • rm may remove the file, but the data is still there. When I want to make an image, I want to compress as much as I can. An image file will still hold that information, which is why it needs to be written over with 0's, aka needs secure-delete's srm command – Andrew Larson Mar 30 '19 at 10:29
  • You create that file with dd if=/dev/zero, so it is full of zeros; overwriting it with zeros changes nothing (it just does the work once again, taking time etc.). – egmont Mar 30 '19 at 10:31
  • I see what you mean; however, even using rm, the notion that there used to be a file there is still recorded, and the same goes if I do it to the folder with rm -r. On large FAT32 drives, those tiny bits that separate zeroed-out free space clunk up the compression, so the image isn't as good or as small as it could be. That's my justification for srm. As for the dd command, I could have used any mode and it wouldn't matter; I just needed dummy files. – Andrew Larson Mar 30 '19 at 10:36
  • "the notions that there used to be a file there is still recorded" – I'm not sure what you're exactly talking about, and especially why it wouldn't be the case with srm. E.g. sure the filename may not get overwritten with zeros, or some other tiny difference that's hardly measurable after compression, if at all, but it'll be the same with srm. If you really need to squeeze out every single bit of it, you should copy your files anew into a brand new image. – egmont Mar 30 '19 at 10:40
  • Now think about bootable drives... you can't just simply copy the files to an img. – Andrew Larson Mar 30 '19 at 10:48
  • That's a good point. If you need to squeeze out every single bit of compression, because literally every byte matters to you in the compressed image, then you should create a brand new image and install the boot loader too by some means. I doubt that's the case. If a tiny extra size compared to a brand new image's compressed size is not a problem, then I don't understand why tiny leftovers, such as a record that a file called emptyfile once existed, matter. – egmont Mar 30 '19 at 10:53
  • And I still don't see why srm would be any better than rm for this role, in fact I see 2 reasons why it's worse: it's a nonstandard tool that needs to be installed, and it fills the files with zeros once again for no benefit (just wasting resources). – egmont Mar 30 '19 at 10:53
  • I finally got what you meant... fixing my answer now. Instead of using dd... going to be using fallocate. – Andrew Larson Mar 30 '19 at 11:31

These instructions are outdated; if you want to reliably zero out the free space on your FAT32 drive, use the script at the end of this answer. (If you can, feel free to edit and update these instructions.)

  1. Make sure your system is up to date, and install the secure-delete tool

     sudo apt-get update
     sudo apt-get install secure-delete
    
  2. Mount the FAT32 drive

    ### On Linux

     sudo fdisk -l
     sudo mount -t vfat /dev/sdb1 /PATH/TO/MOUNTED/DRIVE
    

    (where sdb1 is your drive's partition, as shown by fdisk)

    ### With WSL

     sudo mount -t drvfs F: /PATH/TO/MOUNTED/DRIVE
    

    (where F would be your drive letter)

    There may also be a bug where the mount doesn't work if you try to mount to an existing folder name in /mnt. In that case, if you already tried to mount, unmount with sudo umount /mnt/f, restart WSL, delete the folder (sudo rmdir /mnt/f), recreate it (sudo mkdir /mnt/f), and finally mount again, as summarized below.

    (where f would be your drive letter as lowercase)
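
    The full recovery sequence then looks like this (a sketch; f/F: stand for your drive letter, and one way to restart WSL on recent Windows builds is wsl --shutdown from a Windows prompt, or simply close every WSL window):

     sudo umount /mnt/f                 # undo the failed mount
     # restart WSL here (e.g. wsl --shutdown from a Windows prompt)
     sudo rmdir /mnt/f                  # remove the stale folder
     sudo mkdir /mnt/f                  # recreate it
     sudo mount -t drvfs F: /mnt/f      # mount again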

  3. Create a temporary folder at the root of the drive and move into it

     mkdir /PATH/TO/MOUNTED/DRIVE/tmp
     cd /PATH/TO/MOUNTED/DRIVE/tmp
    
  4. Find the free space, then create dummy files

     df -h /PATH/TO/MOUNTED/DRIVE
     for i in $(seq START $(( END - 1 ))); do
       fallocate -l 1G emptyfile${i} && echo "Created ${i} of $(( END - 1 ))"
     done
    

    where START is 1 and END is the number of free gigabytes shown by df -h, i.e. how many 1 GB files to write, e.g.

     for i in $(seq 1 $(( 10 - 1 ))); do
       fallocate -l 1G emptyfile${i} && echo "Created ${i} of $(( 10 - 1 ))"
     done
    

    This would make nine 1 GB files, emptyfile1 through emptyfile9.

  5. Find the last bit of free space to write to

     df -B1 /PATH/TO/MOUNTED/DRIVE
     fallocate -l $(( REST - 1 )) emptyfileEND
    

    (where REST is the free space in bytes, as shown by df -B1; see the worked example below)
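
    For example, you can pull REST out of df automatically (a sketch; df --output=avail prints just the available-space column):

     REST=$(df -B1 --output=avail /PATH/TO/MOUNTED/DRIVE | tail -1)
     fallocate -l $(( REST - 1 )) emptyfileEND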

  6. Go back to the root of the drive and now delete the tmp folder with srm (a secure-delete tool)

     cd /PATH/TO/MOUNTED/DRIVE
     srm -llrvz /PATH/TO/MOUNTED/DRIVE/tmp
    

I created a script that uses truncate instead of fallocate, and it should now work well. Instead of allocating 1 GB at a time, the script allocates a byte shy of 4 GiB per file, which fills up the drive with fewer files. (The last truncate makes a file smaller than 4 GiB to cover the remainder.)

### Bash Script (compatible with WSL) (download)
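
For reference, the core of that script's approach looks roughly like this (a minimal sketch of my own, not the downloadable script itself; it assumes the tmp folder from step 3 and relies on vfat physically zero-filling files that truncate extends):

    #!/bin/bash
    MOUNT=/PATH/TO/MOUNTED/DRIVE
    CHUNK=$(( 4 * 1024 * 1024 * 1024 - 1 ))   # one byte shy of 4 GiB
    cd "$MOUNT/tmp" || exit 1
    i=1
    while :; do
      avail=$(df -B1 --output=avail "$MOUNT" | tail -1)
      [ "$avail" -le 1 ] && break             # nothing left to fill
      size=$CHUNK
      [ "$avail" -lt "$CHUNK" ] && size=$(( avail - 1 ))
      truncate -s "$size" "emptyfile${i}" || break
      echo "Created emptyfile${i} (${size} bytes)"
      i=$(( i + 1 ))
    done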

  • dd bs=1 is painfully slow, isn't it? Use at least a few kilobytes or perhaps megabytes (a power of two). E.g. bs=1048576 count=1024 for writing a gigabyte. – egmont Mar 30 '19 at 10:36
  • That is an error on my part; I meant to have the two switched. – Andrew Larson Mar 30 '19 at 10:55
  • Be aware that in this case dd allocates 1 GB of memory out of your available RAM, which might be somewhat aggressive, or again perform badly if apps need to start swapping, or some read cache is dropped. I see a slim chance for the kernel to optimize out this situation using CoW, knowing to handle /dev/zero specially, but I doubt it happens. That's why I'd say that about a megabyte, or maybe up to 256 MB-ish, of block size feels like a reasonable compromise to me. – egmont Mar 30 '19 at 11:20
  • Also finally noticed and changed that @egmont, now it's using fallocate. – Andrew Larson Mar 30 '19 at 11:36