11

How can I back up my whole system so that I can restore it exactly to where I was? I have installed some nice themes and customized quite a few things (graphically) and wouldn't want to repeat this process if I need to reinstall.

I guess my question is: do I only have to back up the home folder, or do I have to back up the whole system root folder? Many topics have advised against backing up the root folder, but this has left me confused.

I have no problem installing Ubuntu from scratch again and importing my home folder, as long as it gives me everything back in terms of the UI, custom themes, etc. I am not fussed about the actual data such as Documents, Videos, etc.; I have alternative methods for those via cloud-based solutions. Also, I have some entries in fstab, but I'm not sure whether that file is stored in the home directory as well?

My idea was to use rsync, with something like:

sudo rsync -aAXv --delete --exclude=/dev/* --exclude=/proc/* \
  --exclude=/sys/* --exclude=/tmp/* --exclude=/run/* --exclude=/mnt/* \
  --exclude=/media/* --exclude="swapfile" --exclude="lost+found" \
  --exclude=".cache" --exclude="Downloads" --exclude=".VirtualBoxVMs" \
  --exclude=".ecryptfs"

Then set up a cron job to schedule this, roughly as sketched below.
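Something like this is what I had in mind; the script path, log path, and NAS mount point below are just placeholders:

# hypothetical wrapper script holding the rsync command above,
# with / as the source and the NAS mount point as the destination:
#   sudo rsync -aAXv --delete [excludes as above] / /mnt/nas/backup/

# root crontab entry (edit with: sudo crontab -e) to run it nightly at 02:00
0 2 * * * /usr/local/sbin/nas-backup.sh >> /var/log/nas-backup.log 2>&1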

Can you help?

Zanna
  • 70,465
Maverick32
  • 197
  • 1
  • 3
  • 11

4 Answers

8

The reason for the advice not to back up the "/" folder is this: typically, there are many virtual (and sometimes physical) filesystems mounted beneath it. A virtual filesystem, such as /proc, doesn't have physical files on the hard disk; instead, listing, reading, or writing its file structure manipulates data structures of the kernel. For example, writing 1 into /sys/bus/pci/rescan doesn't write anything anywhere on a hard disk; instead, it re-scans the PCI bus for new devices.

Backing them up would be meaningless, or might even be harmful.

If your goal is to back up everything, then the best thing you can do is back up everything. However, backing up / directly can be problematic because of the issue above.

However, there is a simple trick to solve that. Linux has something called a bind mount: it means you can mount a filesystem in multiple places at once.

For example, a

mount /dev/sda7 /mnt/root

will mount your root filesystem (assuming your root is, for example, sda7) onto the directory /mnt/root. You will see everything in it, except any sub-mounts, including the virtual filesystems.

The trick is this: after that, you can safely back up /mnt/root with any tool you wish, including rsync.
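A minimal sketch of the whole sequence, assuming (as above) that the root filesystem is /dev/sda7 and that the backup target is a directory such as /mnt/nas/backup (just an example path, e.g. your NAS mounted somewhere):

# attach the root filesystem a second time, without any sub-mounts
sudo mkdir -p /mnt/root
sudo mount /dev/sda7 /mnt/root    # effectively the same as: sudo mount --bind / /mnt/root

# mirror it; -H also preserves hard links, --delete removes files deleted at the source
sudo rsync -aAXvH --delete /mnt/root/ /mnt/nas/backup/

# detach it again when finished
sudo umount /mnt/root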

Note also that this is only a recursive file copy. You have no protection against file inconsistency issues that can happen. Imagine a database with two files, for example /var/lib/postgresql/11/main/base/13731 and /var/lib/postgresql/11/main/base/13732, which refer to each other. If the database engine writes to the first and then to the second while your backup process runs, it is possible that the first will be backed up and the second won't. Thus, your database will be garbage after a restoration.

This is also a reason why you will probably find counter-arguments (sometimes quite vehement ones) against backing up your system this way. However, in practical, user-level use cases, such a problem is very rare; maybe the most typical one is when you are playing with a database for some web development task. In a home environment, I simply ignore this problem. In a professional environment, it is practical to use a separate, different backup for data that is sensitive to this.

peterh
  • 286
  • Thank you for the very detailed response here!! I'm new to Linux (from Windows) and I'm not really looking to do any web dev or DB work, but I value your response as it gives me a bigger picture of what could happen in the future.

    May I ask what you would recommend for me? I really just want to keep my nice-looking UI with the custom themes I have put on. Would backing up just the home folder suffice?

    – Maverick32 Mar 11 '19 at 12:25
  • Just to add, I'm looking to back this up to my NAS, which has an ext4 filesystem. Your suggestion of mounting the root dir and then using rsync seems ideal. I'll give that a go :) – Maverick32 Mar 11 '19 at 12:31
  • @Maverick32 Don't worry, rsync is pretty okay! Run it from cron and it will work. I use rsync for pro environments, too (with a little bonus scripting). The important thing is that this way you don't have to fiddle with sub-mounts. Insert this /mnt/root thing into your fstab (see the sketch after these comments), and run the rsync tool from cron. If you set up a local mail system, you will get your daily (weekly?) backup report log in your mailbox, so it will be close to a pro setup. – peterh Mar 11 '19 at 12:38
  • @Maverick32 Your NAS doesn't need to use ext4; any Unix-compliant fs will be okay (Samba on Linux is usable for Windows boxes, too, but it is still Unix-compliant). Check your rsync flags as well; it should also copy hard links. I use rsync -vaH --delete src/ target/, which looks like an abbreviation of yours. If your NAS is Linux-based, it is probably using ext4. – peterh Mar 11 '19 at 12:40
  • Note for the postgres example, an inconsistent backup is solved with WAL archiving. – OrangeDog Mar 11 '19 at 13:14
  • @OrangeDog Wow, that is very useful to know :-) Until now, I've used a transactional pg_dump into a local file, but I've feared that it would cause locks for large DBs. – peterh Mar 11 '19 at 13:26
  • @peterh I agree with the paragraph where you speak about "You have no protection for possible file inconsistency issues". I'd add that this is exactly what LVM can do for you. Just create a snapshot, so that the content will not vary, copy it with rsync and then delete the snapshot. – frarugi87 Mar 11 '19 at 14:07
  • @frarugi87 LVM doesn't protect against that: if the snapshot is created in an unsynced state, your snapshot will be garbage. Doing this correctly would require application support. – peterh Mar 11 '19 at 14:37
  • @peterh you are right... The issue is reduced somewhat, since the copy takes time while the snapshot is very quick, but in the end the main issue is that if something is modifying files while you take the snapshot, the end result is corrupted... – frarugi87 Mar 11 '19 at 16:04
  • @peterh Database server software is typically designed such that, as long as the OS and hardware do what they are supposed to, the data on disk will at least be consistent at any one point in time. (This is typically accomplished using some kind of journal.) You might lose some writes that were in flight during the snapshotting, but nothing should be corrupted if the snapshotting process actually takes a snapshot rather than something similar-but-not-quite. That's no different from, say, a shutdown due to a power loss. Note: I don't know how well PostgreSQL specifically handles this. – user Mar 11 '19 at 19:57
  • Also, the mount command will try to mount the file system again, and hopefully complain that it's already mounted. Mounting the same file system in two different locations at once is fraught with risk. Unless Ubuntu/Canonical is doing something to make mount reinterpret a plain mount as a bind mount, you probably want -o bind added to that command line, at which point you should be mounting / rather than the raw device. – user Mar 11 '19 at 20:00
  • @aCVn No, mounting the same block device multiple times has worked since around kernel 2.4.x. The kernel sees it as a bind mount: instead of multiple mounts, a bind happens. Just try it. – peterh Mar 11 '19 at 20:08
  • @aCVn The DBs I know typically try to minimize the data corruption risk, but they don't eliminate it. Filesystem-level journaling is a very different thing from DB-level journaling. Btw, my personal opinion is that data corruption due to hardware errors should be prevented by backups, and this whole journal-ism ;-) is a wrong direction. Thus, I use ext4 with the journal turned off at home. On company machines, I have journals only to avoid possible accusations for that reason. – peterh Mar 11 '19 at 20:15
  • @peterh rsync and your advice appear to be working well. Thank you! I have, however, run into quite a few errors relating to "symlink failed: Operation not supported (95)". After frantically googling, I did try the -L flag with rsync, with no luck. Any ideas what this could be? – Maverick32 Mar 12 '19 at 22:27
  • Could this be related to your previous comment, "Check your rsync flags as well; it should also copy hard links. I use rsync -vaH --delete src/ target/"? – Maverick32 Mar 12 '19 at 22:34
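The fstab entry suggested in the comments above could look roughly like this (same example mount point as in the answer); expressing it as a bind mount of / avoids hard-coding the device name:

# /etc/fstab: re-attach the root filesystem at /mnt/root on every boot
/    /mnt/root    none    bind    0    0

After a reboot (or a one-off sudo mount /mnt/root), the scheduled rsync job only has to copy from /mnt/root.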
5

Clonezilla to the rescue. Atrocious console interface but gets the job done reliably.

Download the image from https://clonezilla.org/ and make a bootable USB drive.

Boot Clonezilla from the USB drive and back up your drive and/or partitions to external USB drives or sticks for easy restore. Don't let the UX scare you; it really works.

zx81roadkill
  • 326
  • 2
  • 5
  • 1
    Thank you, I've thought about this, but I was hoping for a more automated way to back up to my NAS, possibly incrementally. However, I appreciate this very much. – Maverick32 Mar 11 '19 at 12:20
2

If you really want to have a backup of just your themes and customization, then you don't really need to make a backup of your whole system. You just need to make a backup of some of your dotfiles.

For example, the changes you made to your windows' appearance are in the ~/.config/gtk-3.0/settings.ini file. Most of the programs that you install will keep a configuration file in the ~/.config directory; you just need to make a backup of those configuration files.
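For example, something like this would collect the most common theme-related locations into a single archive; the exact list depends on what you have customized, and some of these directories may not exist on your system:

# archive the usual appearance-related dotfiles (adjust the list to taste)
tar czf ~/theme-backup.tar.gz -C ~ \
    .config/gtk-3.0 .themes .icons .local/share/themes .local/share/icons

Restoring is then just a matter of unpacking the archive back into the home directory of the new install.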

Nomi Shaw
  • 425
  • This sounds great, although I would have no idea which particular files I should be backing up for this purpose? – Maverick32 Mar 11 '19 at 12:21
  • It actually depends on what you want to back up. For example, if you made some changes for Vim, you want to back up your ~/.vimrc. – Nomi Shaw Mar 12 '19 at 07:26
  • Sorry to reopen an old post. I was wondering: if I copy only my home directory and restore it to a new install of Ubuntu (same version, 18.04), would it retain everything I need, since all my config files are there, I guess? Any downsides here? Permissions? – Maverick32 Jun 04 '19 at 19:54
  • Yes, it will retain everything except for the packages that need installation. – Nomi Shaw Jun 13 '19 at 04:45
1

Timeshift - Automated Incremental Backups of Your System Made Easy

Timeshift provides functionality similar to the System Restore feature in Windows and the Time Machine tool in Mac OS.

It can be installed from Ubuntu's repos, but Ubuntu may or may not have the latest version. You can install it either by searching for Timeshift in the Ubuntu Software store or by typing the following in a terminal:

sudo apt-get install timeshift

If having the most up-to-date version matters to you (you can check the release notes of each version and see whether they would affect you for better or worse), you can add the developer's repo and install Timeshift from there:

sudo add-apt-repository -y ppa:teejee2008/timeshift
sudo apt-get update
sudo apt-get install timeshift

For more specifics on how to configure Timeshift to suit your needs, please check out the developer's GitHub page: https://github.com/teejee2008/timeshift
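Once installed, Timeshift can also be run from the terminal, which makes it easy to script; for example (the available options may differ slightly between versions):

# take an on-demand snapshot with a comment, tagged as a daily snapshot
sudo timeshift --create --comments "before theme changes" --tags D

# list the snapshots that exist
sudo timeshift --list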

guttermonk
  • 1,004
  • 14
  • 29