
I am going to make a shell script that compresses all files from the home directory, but tar is not running after I changed the permissions.

#!/bin/bash
tar -zxvf homefiles.tar.gz /home/
  • you will need to be a little more specific for an exact answer, but my approach would be to use a cron task. – Jermayne Williams Feb 26 '18 at 18:21
  • 3
    That won't fly. You cannot remove the hidden files and config files in /home/$USER. Plus I doubt there is a log that logs when a file is read. Changed, yes, but read? There is atime, but that is not used in normal Linux anymore. Also: atime is done on inodes, so change a hardlink and all files below it change, seriously messing up what you want. – Rinzwind Feb 26 '18 at 18:27
  • That wouldn't be best practice. You may delete things that you need. What type of content are you trying to remove? – Jermayne Williams Feb 26 '18 at 18:30
  • 3
    You cannot determine a file list from your filesystem of files that were used at a certain time. You need the atime option on your mount point, and that option is no longer used because it kills SSDs. And if you only archive the files: why care about the "one week"? Just archive everything. – Rinzwind Feb 26 '18 at 18:34
  • 2
    Look at tar's manpage: -x eXtracts a tarfile. You need -c to Create a tarfile. Also your shebang line looks wrong. It should be #!/bin/bash (without blanks in-between). – PerlDuck Feb 26 '18 at 19:11
  • @PerlDuck Just to be clear, #! /bin/bash is also fine, see WP: “White space after #! is optional.” – dessert Feb 26 '18 at 22:35
  • @dessert Yes, but it was like #! /bin /bash before — like the whole question was different before the OP edited it when it turned out that the original question won't get a proper answer. – PerlDuck Feb 27 '18 at 10:02
  • Could you please add a little more detail? What exactly did you do, what did you expect to happen and what happened instead? Did you encounter any warning or error messages? Please reproduce them in their entirety in your question. You can select, copy and paste terminal content and most dialogue messages in Ubuntu. Please [edit] your post to add information instead of posting a comment. (see How do I ask a good question?) – David Foerster Mar 02 '18 at 13:18

1 Answer

9

It is not clear what permissions you have changed, but I will set that aside :)

Normally, system users don't have write permissions outside their own home directories, which by default are under /home/<user>. Also, user A can't read most of user B's files. So, according to the path in your question, to back up all files and directories located in /home you should run the command with root privileges, by using sudo.

In addition, you are using the wrong option: -x instead of -c. See tar --help:

tar -cf archive.tar foo bar  # Create archive.tar from files foo and bar.
tar -xf archive.tar          # Extract all files from archive.tar.

So the right command should be one of these:

sudo tar -zcvf /home/homefiles.tar.gz /home/ --exclude=/home/homefiles.tar.gz # the backup will be created in the directory `/home`
tar -zcvf /home/<user>/homefiles.tar.gz /home/<user> --exclude=/home/<user>/homefiles.tar.gz
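If you want to see the difference between -c and -x on a small example, here is a throwaway demo (all paths are temporary, created just for the test):

```shell
# Throwaway demo: create an archive with -c, list it with -t,
# then extract it with -x into a separate directory.
tmp="$(mktemp -d)"
mkdir -p "$tmp/data"
echo "hello" > "$tmp/data/file.txt"

tar -zcf "$tmp/homefiles.tar.gz" -C "$tmp" data      # -c: create
tar -tzf "$tmp/homefiles.tar.gz"                     # -t: list contents
mkdir "$tmp/restore"
tar -zxf "$tmp/homefiles.tar.gz" -C "$tmp/restore"   # -x: extract
cat "$tmp/restore/data/file.txt"
rm -rf "$tmp"
```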

Here is an extended example for you:

How to create custom backup script

Let's assume that, along with the /home directory, there are also Apache and MySQL servers, and we want to make a more complete backup of the system.

1. Create a file named mybackup, make it executable, and place it in /usr/local/bin so that it is accessible as a shell command system-wide. Also create a directory where the backup files will be stored:

sudo touch /usr/local/bin/mybackup && sudo chmod +x /usr/local/bin/mybackup
sudo mkdir /var/backup

Paste the following script as the content of the file /usr/local/bin/mybackup and save it:

#!/bin/bash

## Get the current date as variable.
TODAY="$(date +%Y-%m-%d)"

## Delete backup files older than 2 weeks before creating the new one.
find /var/backup/ -mtime +14 -type f -delete

## MySQL Section. The first line is if you are using `mysqldump`,
## the next line is for `automysqlbackup`. I'm using both.
mysqldump -u'root' -p'<my-pwd>' --all-databases | gzip > /var/backup/mysql-all-db.sql.gz
automysqlbackup

## Tar Section. Create a backup file, with the current date in its name.
## Add -h to dereference symbolic links (archive the files they point to).
## Backup some system files, also the entire `/home` directory, etc.
## --exclude some directories, for example the browser's cache, `.bash_history`, etc.
tar zcvf "/var/backup/my-backup-$TODAY.tgz" \
/etc/hosts /etc/sudoers* /var/spool/cron/crontabs /etc/cron* \
/etc/apache2 /etc/letsencrypt /etc/php/7.0/apache2/php.ini \
/etc/phpmyadmin/apache.conf /etc/mysql/debian.cnf \
/etc/ssh/sshd_config* /etc/pam.d/sshd \
/usr/local/bin \
/var/backup/mysql-all-db.sql.gz /var/lib/automysqlbackup/latest/*.sql.gz \
/root \
/home \
/var/www \
--exclude=/home/<some-user>/.composer --exclude=/home/<some-user>/.npm

## MySQL Section - remove the DB backup files, if you want:
#rm /var/lib/automysqlbackup/latest/*.sql.gz
rm /var/backup/mysql-all-db.sql.gz
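A side note on the cleanup line near the top of the script: find's -mtime +14 matches files whose modification time is more than 14 days old. You can sanity-check it in a throwaway directory (GNU touch -d is used here to back-date a file):

```shell
tmp="$(mktemp -d)"
touch -d '30 days ago' "$tmp/old.tgz"    # back-dated file
touch "$tmp/new.tgz"                     # fresh file
find "$tmp" -mtime +14 -type f -delete   # deletes only old.tgz
ls "$tmp"
rm -rf "$tmp"
```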
  • If this is a VPS, maybe you would want to access your backup file from the Internet through a web browser. In this case we could encrypt the file for additional security. If you are a fan of 7zip, add commands like the following to the bottom of the script:

    rm /var/www/html/the-location/*
    7za a -tzip -p'<my-strong-pwd>' -mem=AES256 "/var/www/html/the-location/my-backup-$TODAY.tgz.7z" "/var/backup/my-backup-$TODAY.tgz"
    
  • If this is a desktop, maybe you will want to exclude the entire Downloads directory for each user. You could also exclude all files larger than a certain size, and/or files with certain extensions.

  • Please note all commands used above are located in /bin or /usr/bin, which are listed in Cron's default $PATH. If you intend to use a Cron job to automate the task and you have commands (scripts) located outside of these directories, you should use /the/full/path/to/the/script :)
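As a sketch of the size/extension exclusions mentioned above: tar itself has no size filter, but you can build the file list with find and pass it to tar via -T (--files-from). The paths below are throwaway examples created just for the demo:

```shell
tmp="$(mktemp -d)"
mkdir -p "$tmp/home/user/Downloads"
echo "keep me" > "$tmp/home/user/notes.txt"
echo "skip me" > "$tmp/home/user/big.iso"

# Only files under 100 MB, skipping Downloads and *.iso:
find "$tmp/home" -type f -size -100M \
    ! -path '*/Downloads/*' ! -name '*.iso' > "$tmp/filelist.txt"
tar -zcf "$tmp/backup.tgz" -T "$tmp/filelist.txt"
tar -tzf "$tmp/backup.tgz"
rm -rf "$tmp"
```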

2. To create a backup manually, you can now use this command:

sudo mybackup
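After a manual run, it's worth checking that the newest archive is actually readable (the paths follow the script above):

```shell
# Find the newest backup, test the gzip layer, and peek at the contents.
latest="$(ls -t /var/backup/my-backup-*.tgz | head -n 1)"
gzip -t "$latest" && echo "archive OK: $latest"
tar -tzf "$latest" | head
```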

3. To automate the task, you can add a new entry to root's crontab with the command sudo crontab -e. For example, to execute the script every night at 1:15 AM, the Cron job definition should be:

15 1 * * * /usr/local/bin/mybackup > /var/log/mybackup-cron.log 2>&1
  • This will also create the log file /var/log/mybackup-cron.log which, because of the redirection 2>&1, will contain any error messages too. Read the log periodically to be sure everything works fine.

Alternatively, I would prefer to create a script in /etc/cron.daily/:

sudo touch /etc/cron.daily/mybackup && sudo chmod +x /etc/cron.daily/mybackup

The content of the file should be something like this:

#!/bin/sh
test -x /usr/local/bin/mybackup || exit 0
# printf is used instead of "echo -e" (dash's echo has no -e option);
# ">>" appends, so each run is added to the log instead of overwriting it.
printf '*** Log Begin %s ***\n\n' "$(date +%Y-%m-%d)" >> /var/log/mybackup-cron.log
/usr/local/bin/mybackup >> /var/log/mybackup-cron.log 2>&1
printf '*** Log End %s ***\n\n' "$(date +%Y-%m-%d)" >> /var/log/mybackup-cron.log
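The test -x ... || exit 0 guard on the first line is the usual pattern in /etc/cron.daily scripts: if the main script is ever removed, the wrapper exits quietly instead of failing. A throwaway demo (the /nonexistent path is intentional, to simulate a missing script):

```shell
tmp="$(mktemp -d)"
cat > "$tmp/wrapper" <<'EOF'
#!/bin/sh
test -x /nonexistent/mybackup || exit 0
echo "would run backup"
EOF
chmod +x "$tmp/wrapper"
"$tmp/wrapper" && echo "exited cleanly"   # guard triggers, nothing else runs
rm -rf "$tmp"
```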

Update:

The above script can be found in the GitHub repository named Simple Backup Solutions.

pa4080
  • 29,831
  • Thanks. How can I compress all files which haven't been used in one week with tar? Do I have to use a cron job? –  Feb 27 '18 at 06:32
  • 1
    @20_90, in the comments under the question, Rinzwind gave an answer to this question. I don't think I can add anything to that answer. – pa4080 Feb 27 '18 at 09:53
  • @20_90 -not- possible in regular Ubuntu. You will need to add atime to your mount options and that option got removed due to the fact it --kills-- SSDs: atime means to write every file at any time it is changed making it do a lot of writes. Deadly for an SSD. We now have something that writes to disk once every so often but that also killed the ability to know when a file was read. – Rinzwind Feb 27 '18 at 10:02
  • @Rinzwind Is atime as a mount option really removed? My manpage (17.10.1) still mentions it. I thought it's just highly discouraged and no longer default (since years). – PerlDuck Feb 27 '18 at 10:06