I want a command that completely shreds the contents of a folder/directory (which may itself contain folders/directories). Please also explain the command.
8 Answers
- Install the package secure-delete.
- Use the command srm -r pathname to remove your folder and files.

The default settings are for 38 (!!!) passes of overwrites, which is extreme overkill imho (see more info about this here).
For my usage, I only want a single pass of random data, so I use srm -rfll pathname.
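For reference, here is the whole sequence as a minimal sketch, assuming Ubuntu's standard apt packaging as used elsewhere on this page; mydir is just a placeholder name:
sudo apt-get install secure-delete    # provides the srm command
srm -r -v mydir                       # default behaviour: 38 overwrite passes, verbose
srm -rfllv mydir                      # alternative: a single fast pass of random data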
If you want to create a right-click option in the GUI for files and folders, use gnome-actions to call a script like this:
#!/bin/bash
# Ask for confirmation first; zenity --question succeeds only if the user clicks "Yes".
if zenity --window-icon=warning --question --title="Secure Delete" --no-wrap --text="Are you sure you want to securely delete:\n\n $1\n\nand any other files and folders selected? File data will be overwritten and cannot be recovered."
then
    # Overwrite with a single fast pass (-fll), recurse (-r), verbose (-v),
    # and pipe the output into a pulsating zenity progress dialog.
    /usr/bin/srm -fllrv "$@" | zenity --progress --pulsate --text="File deletion in progress..." --title="Secure Delete" --auto-close
fi
If you want more paranoid settings, be sure to modify the above script.
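For instance, to go back to srm's full 38-pass default inside the script, you would just drop the fast/insecure flags on the srm line (a sketch, everything else unchanged):
/usr/bin/srm -rv "$@" | zenity --progress --pulsate --text="File deletion in progress..." --title="Secure Delete" --auto-close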
- -f: fast (and insecure mode): no /dev/urandom, no synchronize mode. -l: lessens the security (use twice for total insecure mode). Could you please explain these two things? 38 overwrites is the default; how do these affect the '38' value? And why is l used two times in -rfll? – Ashu Apr 16 '12 at 14:57
- OK, the whole default process is: 1 pass with 0xff, 5 random passes (/dev/urandom is used for a secure RNG if available), 27 passes with special values defined by Peter Gutmann, another 5 random passes from /dev/random. Then rename the file to a random value and truncate the file. IIRC, /dev/random is considered a better random number generation system. Using -fll we skip the 1+5+27+5 passes and substitute a single pass of random data, possibly from a less "truly random" generator. – Veazer Apr 16 '12 at 18:16
- @Ashu Shortened from the man page (Debian Jessie) for srm: the first -l means only two passes; the second -l means only one pass. For the others, -f: fast (non-secure random bits) and -r: recursive. I also highly suggest -v: verbose. I would also suggest running this in a screen instance; it can take quite a while on a lot of data. – ThorSummoner Mar 21 '17 at 16:00
- You may use BleachBit's shredder with 'bleachbit -s' as an alternative, which is fast and much more widely known. – Habib Feb 27 '21 at 15:11
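Pulling the comments above together, a quick sketch of how the flags change srm's pass count, assuming the behaviour they describe (dir is a placeholder):
srm -r dir      # default: 1 + 5 + 27 + 5 = 38 overwrite passes
srm -rl dir     # one -l: only two passes
srm -rll dir    # two -l: a single pass
srm -rfll dir   # -f additionally skips /dev/urandom and sync (fast, insecure mode)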
For files (not directories), here's a simpler way than the -exec shred -u {} \; approach:
cd to your directory, then:
find . -type f -print0 | xargs -0 shred -fuzv -n 48
This recursively does 48 passes over every file under the directory you cd'ed into.
Hope this helps some.
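Note that this pipeline only overwrites regular files; the (now empty) directory tree is left behind, so afterwards you can step back out and remove it the usual way, as in the other answers (yourdir is a placeholder):
cd ..
rm -rf yourdir   # remove the emptied directory tree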

- Adding '-print0' to find and '-0' to xargs? Directories may have whitespace in their names. – weakish Jul 27 '13 at 15:03
sudo apt install wipe
wipe -rfi dir/*
where the flags used are:
-r – tells wipe to recurse into subdirectories
-f – enables forced deletion and disable confirmation query
-i – shows progress of deletion process
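Note that dir/* leaves the now-empty dir itself in place; if I recall correctly you can point wipe at the directory instead so that it is removed as well (please check man wipe before relying on this):
wipe -rfi dir   # recurse into dir and remove the directory itself too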

- For the sake of transparency, wipe has been unmaintained since 2009 and therefore perhaps should be used with caution: https://sourceforge.net/projects/wipe/files/ – Mehrad Mahmoudian Apr 11 '22 at 12:37
- Wipe version 0.24-7, provided in Ubuntu Impish, was last updated in 2016 on https://github.com/berke/wipe – Demon Apr 12 '22 at 13:49
shred works only on files, so you need to shred the files in the directory and its subdirectories first and then remove the directories. Try
find [PATH_TO_DIR]
and make sure you only see the files you want to delete, then run
find [PATH_TO_DIR] -exec shred -u {} \;
and finally remove the directories with
rm -rf [PATH_TO_DIR]

- Can you please explain {} \; ? Also, somewhere else I had seen the same command as yours, but it was '{}' \; . What is the difference between the two? – Ashu Apr 16 '12 at 14:43
- Since shred doesn't work on directories, how about adding the option -type f to the find commands? – andol Apr 16 '12 at 15:49
- Can't explain it any better than the find man page... the {} and ; are part of the -exec option of find. – Ruediger Apr 16 '12 at 20:03
You probably want to use something similar to this:
find dir -type f -exec shred -fuz {} +
rm -rf dir
The first command finds only files and passes them to shred, as many at once as possible, so there is no need to start a new shred process for every file the way \; does. Finally, remove the directories too.
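If it helps to see the difference between \; and +, here is a harmless sketch using echo as a stand-in for shred (dir is a placeholder):
find dir -type f -exec echo shredding {} \;   # runs the command once per file
find dir -type f -exec echo shredding {} +    # batches many file names onto one invocation
Quoting the braces as '{}' makes no difference to find itself; it only protects them from shells that might otherwise expand them.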

I have added the following bash function for this purpose to my .bashrc:
function rm2 {
    for var in "$@"    # quote "$@" so names containing spaces survive
    do
        if [ -d "$var" ]
        then
            # Directory: shred every file inside it, then remove the tree.
            # Run the whole pipeline in a background shell via nohup so it
            # keeps going even if the terminal is closed.
            nohup bash -c '/usr/bin/find "$1" -type f -exec shred -n 2 -u -z -x {} \; ; /bin/rm -rf "$1"' _ "$var" &
        else
            # Single file: shred it directly (2 passes, final zero pass, then unlink).
            nohup /usr/bin/shred -x -n 2 -u -z "$var" &
        fi
    done
    exit    # note: this closes the current shell; the nohup'ed jobs carry on
}
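Usage is then simply (the names below are placeholders); remember that the trailing exit closes the shell you call it from while the background jobs continue:
rm2 secret-notes.txt old-project-dir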
If you want to do this from Nautilus (aka 'Files' app), then you could use the nautilus-wipe package.
sudo apt-get install nautilus-wipe
Once installed, there will be two new options when you right-click on a folder: Wipe and Wipe available disk space. Choosing Wipe on the folder will give further options (e.g. number of passes, fast mode, last pass with zeros).

When I need to shred multiple files or an entire directory I simply use shred -vzn 20 ./shredme/*.* for example, which overwrites all files with a file extension in the "shredme" folder (note that *.* only matches names containing a dot and does not descend into subdirectories). Then you can use the standard rm -rf ./shredme command to remove the folder itself (or just right-click and delete the folder), as all the data has been overwritten 20 times in this example.
I did a quick test of this with a bunch of duplicate images as an example.
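If the folder also contains subdirectories or files without a dot in their name, a find-based variant along the lines of the other answers covers everything (shredme is again just the example folder name):
find ./shredme -type f -exec shred -vzn 20 {} +   # overwrite every regular file, 20 passes plus a final zero pass
rm -rf ./shredme                                  # then remove the emptied tree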
- … shred command does, then there's your answer. – psusi Apr 16 '12 at 13:17
- shred or secure-delete? – Ashu Apr 16 '12 at 14:32
- shred is not as effective as you think because modern file systems and hardware do not overwrite data in place, but instead journal the changes, or move it around for wear-levelling. Related: https://unix.stackexchange.com/questions/27027/how-do-i-recursively-shred-an-entire-directory-tree – Mike Ounsworth Nov 20 '16 at 03:54