shred from GNU coreutils was specifically designed for this purpose. From man shred:
Overwrite the specified FILE(s) repeatedly, in order to make it harder
for even very expensive hardware probing to recover the data.
shred overwrites the file's contents with random data (the source of randomness can be chosen with --random-source, e.g. /dev/urandom) and can optionally finish by overwriting the contents with zeros (as from /dev/zero). So if you want to reinvent the wheel you can do this by hand, but it is better to use shred, which is already optimized for the task.
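If you really do want to see the by-hand approach, a rough single-pass sketch (using the placeholder file my_secured_file.txt; this lacks shred's multiple passes, careful syncing and error handling, and reads the whole file size into one buffer, so it is only suitable for small files) could look like:
f=my_secured_file.txt                                                         # placeholder file name
size=$(stat -c %s -- "$f")                                                    # current size in bytes
dd if=/dev/urandom of="$f" bs="$size" count=1 conv=notrunc iflag=fullblock    # one pass of random bytes, in place
dd if=/dev/zero of="$f" bs="$size" count=1 conv=notrunc                       # final pass of zeros, in place
sync                                                                          # flush the writes to disk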
For example, for any given file my_secured_file.txt, you can do:
shred -vzn 5 my_secured_file.txt
Here:
-v is for verbose output
-z overwrites the file with zeros afterwards, to hide the shredding
-n 5 sets the number of overwrite iterations (the default is 3)
You can increase the number of iterations if you want, although the default is usually enough, and you can also have the file removed afterwards (-u, --remove).
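Putting those options together, and also removing the file when you are done, the full invocation would look something like:
shred -vzn 5 -u my_secured_file.txt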
Check man shred for details.
As shred operates on individual files, to run it recursively on all files of a directory, e.g. my_secret_dir, you can use a globstar loop:
shopt -s globstar
for f in my_secret_dir/**/*; do shred -vzn 5 -- "$f"; done
Or use find:
find my_secret_dir -type f -exec shred -vzn 5 -- {} +
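If you also want the files (and then the directory itself) gone afterwards, a variant along these lines should work; -u tells shred to remove each file after overwriting it, and rm -r then deletes the emptied directory tree:
find my_secret_dir -type f -exec shred -vzun 5 -- {} +
rm -r my_secret_dir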
Note: shred has the caveat that it cannot work reliably on journaling, caching, RAID-based, or compressed file systems. Quoting man shred:
CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the
traditional way to do things, but many modern file system designs do
not satisfy this assumption. The following are examples
of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:
log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)
file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems
file systems that make snapshots, such as Network Appliance's NFS server
file systems that cache in temporary locations, such as NFS version 3 clients
compressed file systems
In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal
mode, which journals file data in addition to just metadata. In both
the data=ordered (default) and data=writeback modes,
shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular
file system in the /etc/fstab file, as documented in the mount man
page (man mount).
In addition, file system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a
shredded file to be recovered later.
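As the quoted text mentions, the data= mode is set via mount options. For illustration only, a hypothetical /etc/fstab entry that forces full data journaling on a separate data partition (the device /dev/sdb1 and mount point /data are made-up placeholders) could look like:
# hypothetical /etc/fstab line
/dev/sdb1   /data   ext3   defaults,data=journal   0   2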
In Ubuntu, if you are using the ext4 filesystem (which is also a journaling filesystem), journaling applies by default only to metadata, not to file data (data=ordered is the default mode), so you should get the expected result from shred unless you have changed that default.
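To check whether a non-default data= mode is currently active on a mounted filesystem, you can inspect its mount options; the default data=ordered usually does not show up explicitly, but an overridden mode should (findmnt here is just one way to list the options of the root filesystem):
findmnt -no OPTIONS /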
As a side note, you can find the default filesystem options with dumpe2fs (replace /partition with the actual device):
sudo dumpe2fs -h /partition |& grep 'Filesystem features'
Example:
% sudo dumpe2fs -h /dev/sda3 |& grep 'Filesystem features'
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
The has_journal flag indicates that this is a journaling filesystem, and the default journal option(s) are:
% sudo dumpe2fs -h /dev/sda3 |& grep 'Journal features'
Journal features: journal_incompat_revoke
Both at once:
% sudo dumpe2fs -h /dev/sda3 |& grep 'features'
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Journal features: journal_incompat_revoke