5

So, long story short, I have some, er, sensitive data that I'd like to protect from people trying to snoop around. Let's say it's in a folder on my desktop called My Secrets.

However, I'd like to retain some method of destroying this data in such a way that it is impossible to recover and that there is no proof it ever existed in the first place.

I'd like to be able to preserve my Ubuntu installation and any/all non-sensitive data, so a complete nuke (sadly) isn't an option.

How can I achieve this in Ubuntu?

Ideally, I'd also like to be able to trigger this deletion at the drop of a hat, from which point there is no stopping the (at the very least partial) destruction of my data. I'm also willing to use a solution that requires setup beforehand (say, for any future data that needs storing).

Kaz Wolfe
  • May we assume you use rotating magnetic disks (HDD) as storage and no Flash memory (SSD, USB drives, SD cards, ...)? – Byte Commander Sep 09 '16 at 19:03
  • @ByteCommander To make it easier, sure. But, ideally, I'd like a solution that works on anything. – Kaz Wolfe Sep 09 '16 at 19:04
  • 1
    Pfff, high maintenance much? I still think a sledgehammer is the best option. – David Sep 09 '16 at 19:06
  • @DavidCole-GrammarPolice I'd like to keep my functioning system. – Kaz Wolfe Sep 09 '16 at 19:07
  • 1
  • On SSDs, you either need to use encryption starting from the very first second (your sensitive data must never touch the raw disk unencrypted) or you need to fiddle around with special ATA commands to make the SSD controller firmware perform a complete nuke of all flash cells of the entire device, including the spare ones. The SSD might also have hardware encryption; then it might be enough if the controller discards the old key and creates a new one. – Byte Commander Sep 09 '16 at 19:07
  • The only thing that meets your requirement of almost immediate, securely irrecoverable deletion is setting up a strongly encrypted partition or container file (don't forget to encrypt swap as well), which lets you destroy everything by overwriting its encryption header. I think the LUKS stuff is designed for that, but I don't remember it that well (see the sketch after these comments)... – Byte Commander Sep 09 '16 at 19:26
  • @ByteCommander If that works and can grant plausible deniability... then we have a decent answer... maybe? – Kaz Wolfe Sep 09 '16 at 19:44
  • @KazWolfe - There is no such thing as "plausible deniability" lol – Panther Sep 09 '16 at 19:52
  • What about BleachBit? – You'reAGitForNotUsingGit Sep 09 '16 at 21:25
  • What is your threat model? Are you trying to protect yourself from a snooping younger sibling or from a Three Letter Agency? If it's a TLA, how much are they willing to break (I mean "bend") the law to retrieve the data? Is rubber-hose cryptanalysis on the table? – Cort Ammon Sep 09 '16 at 22:33
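
A minimal sketch of the encrypted-container approach suggested in the comments above, assuming cryptsetup is installed (the paths and sizes are illustrative, and recent cryptsetup versions attach a loop device for container files automatically):

# One-time setup: create and format a LUKS container file
fallocate -l 1G ~/secrets.img
sudo cryptsetup luksFormat --type luks1 ~/secrets.img
sudo cryptsetup open ~/secrets.img secrets
sudo mkfs.ext4 /dev/mapper/secrets
sudo mount /dev/mapper/secrets /mnt

# "Panic button": wipe all key slots so the data can never be decrypted again
sudo cryptsetup luksErase ~/secrets.img

Note that this makes the data unrecoverable but does not hide that an encrypted container existed: the LUKS header is still recognizable, so it provides no plausible deniability on its own.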

2 Answers

6

shred from GNU coreutils was specifically designed for this purpose.

From man shred:

Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data.

shred reads random bytes from /dev/urandom and overwrites the file's contents with them, optionally finishing with a pass of zeroes (from /dev/zero). So if you want to reinvent the wheel you could do this by hand (see the sketch below), but it is better to use shred, which is already optimized for this task.
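
For illustration only, a rough by-hand equivalent using dd (a sketch; the filename is an example, and shred remains the better tool):

# Overwrite the file in place with random bytes, then with zeroes, keeping its size
size=$(stat -c %s my_secured_file.txt)
dd if=/dev/urandom of=my_secured_file.txt bs=1 count="$size" conv=notrunc
dd if=/dev/zero of=my_secured_file.txt bs=1 count="$size" conv=notrunc
sync   # flush the cached writes to disk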


For example, for any given file my_secured_file.txt, you can do:

shred -vzn 5 my_secured_file.txt

Here:

  • -v for verbosity
  • -z for overwriting the file with zeroes afterwards, to hide shredding
  • -n 5 sets the number of overwriting passes; the default is 3

You can increase the number of iterations if you want, although the default is usually enough, or have shred remove the file afterwards with -u (--remove); an example follows.
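
For example, to shred with five passes, add a final zero pass, and delete the file in one go:

shred -vzun 5 my_secured_file.txt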

Check man shred.


As shred operates on individual files, to run it on all files of a directory (e.g. my_secret_dir) recursively:

shopt -s globstar
# shred works only on regular files, so skip directories
for f in my_secret_dir/**/*; do [[ -f "$f" ]] && shred -vzn 5 -- "$f"; done

Or with find:

find my_secret_dir -type f -exec shred -vzn 5 -- {} +

Note:

shred has the caveat that it cannot work reliably on journaling, caching, RAID, or compressed file systems. Quoting man shred:

CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:

  • log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

  • file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems

  • file systems that make snapshots, such as Network Appliance's NFS server

  • file systems that cache in temporary locations, such as NFS version 3 clients

  • compressed file systems

    In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).

    In addition, file system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a shredded file to be recovered later.


In Ubuntu, the default ext4 filesystem is also a journaling filesystem, but it journals only metadata by default (data=ordered), not file data, so you should get the expected result from shred-ing unless you have changed that mount option.
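
To double-check which mode a mounted filesystem is actually using, something like this should work (the / mount point is an example; if no data= option is listed, ext4 is running with its default, data=ordered):

findmnt -no OPTIONS / | tr ',' '\n' | grep '^data=' || echo 'data=ordered (default)'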


As a side note, you can list a filesystem's feature flags with (replace /dev/sdXY with your partition):

sudo dumpe2fs -h /dev/sdXY |& grep 'Filesystem features'

Example:

% sudo dumpe2fs -h /dev/sda3 |& grep 'Filesystem features'
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize

The has_journal flag indicates that this is a journaling FS, and the journal feature(s) in use are:

% sudo dumpe2fs -h /dev/sda3 |& grep 'Journal features'
Journal features:         journal_incompat_revoke

Both at once:

% sudo dumpe2fs -h /dev/sda3 |& grep 'features' 
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Journal features:         journal_incompat_revoke
heemayl
  • The -u option to delete (unlink) the file after shredding is probably more useful than using 5 passes instead of the default 3. Also, shred has no recursive option and operates only on files; that means to delete everything in a nested directory tree, you must combine it with find. – Byte Commander Sep 09 '16 at 19:11
  • 1
    It should also be noted that shred can only securely wipe a file if the mapping between file system blocks and hard disk is guaranteed to be constant. This is not the case on Flash storage like SSDs, USB drives or memory cards as well as hybrid HDD/SSD drives and possibly RAID drives or other mappings. Same goes for compressed file systems or devices that take snapshots. File systems that use caching or data journaling (store data to be written in a special journal file before actually writing it to the data area) may also leave intact data fragments behind after shredding. – Byte Commander Sep 09 '16 at 19:19
  • @ByteCommander Edited. – heemayl Sep 09 '16 at 19:43
  • 3
    The "theory" that you have to write to a disk or file system more then once has been debunked long ago - http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/ and http://skeptics.stackexchange.com/questions/13674/is-it-possible-to-recover-data-on-a-zeroed-hard-drive. Please do not spread FUD =) – Panther Sep 09 '16 at 19:48
  • Nice, please note that ext3 and ext4 do not have data journaling on by default though (default is option data=ordered). They do only journal metadata. – Byte Commander Sep 09 '16 at 19:50
  • 1
    @bodhi.zazen Hmmm, that's for the extreme pedant; anyway, made an edit, check now. – heemayl Sep 09 '16 at 19:59
  • @ByteCommander That's correct, but I had to go through the source; couldn't depend on the arcane man shred. – heemayl Sep 09 '16 at 20:01
-3

Here's an off-the-wall suggestion: store the sensitive data only in encrypted, password-locked cloud storage, with no synced folder on your computer (i.e. don't install Dropbox or similar, which creates a local mirror of the remote storage) -- just a bookmark in your browser. When you want to remove local evidence of the sensitive data, delete the bookmark and wipe the browser history (or, ideally, use a high-security browser variant or setting that securely wipes the history every time you close it; a sketch of the wiping step follows). Ten seconds or so, and there'll be no way for anyone to know where to start looking, short of a forensic-level complete system search (extremely unlikely unless you're an international spy or child porn trafficker).
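
As a rough sketch of the "delete the bookmark and wipe the history" step, assuming Firefox on Ubuntu (the places.sqlite location is the standard one but is an assumption here; close Firefox first, and shred's filesystem caveats from the other answer still apply):

# Firefox stores bookmarks and history together in places.sqlite;
# shred the database and its companion files, then delete them
find ~/.mozilla/firefox -name 'places.sqlite*' -exec shred -vzu -- {} +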

Zeiss Ikon
  • OP said they already stored the data on the disk!! – Anwar Sep 09 '16 at 19:47
  • 4
    The "theory" that you have to write to a disk or file system more then once has been debunked long ago - http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/ and http://skeptics.stackexchange.com/questions/13674/is-it-possible-to-recover-data-on-a-zeroed-hard-drive . Please do not spread FUD =) . Encryption is not completely reliable and there are multiple ways to crack it see - http://imgs.xkcd.com/comics/security.png – Panther Sep 09 '16 at 19:49