5

I have a system where I would like to create a full system backup. The backup should include all system settings, drivers, user data, etc. on the system storage drive. The backup should be usable to restore the system drive of that specific PC after exchanging the physical storage device, once the system drive crashes.

The situation is that I only have remote SSH access for creating the backup. For restoring, I obviously have to (and can) get to the PC physically (to replace the drive, etc.). In addition, the PC doesn't have direct access to the internet: I connect via the internet to a jump host and have Ethernet access from there to the PC.

With physical access I would boot the PC via a Ventoy boot stick into a GParted live system and use dd to clone the system partitions to image files. But this isn't possible over remote access.

Is there any alternative? E.g., is it possible to use a backup strategy like

tar -cvpzf /backup.tar.gz --exclude=/backup.tar.gz --one-file-system /

or https://help.ubuntu.com/community/BackupYourSystem/TAR

or is this not suitable in my case? Is it not possible, or maybe not complete enough?

I could remotely move the backup from that PC to a NAS or to the jump host and download it from there (or generate the backup directly onto the NAS), so this is really about how to generate a suitable backup, not about how to make sure the backup file is preserved when the system crashes. I will also be able to reduce the used disk space to less than 30% before creating the backup, if the current 48% is too close to 50% (which could prevent strategies that store the backup on the drive itself).
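E.g., I imagine streaming the tar archive directly to the jump host, so nothing is written to the local drive ("user@jumphost" and the destination path here are placeholders):

```shell
# Stream a compressed archive of the root filesystem over SSH,
# writing nothing to the drive being backed up.
# "user@jumphost" and the destination path are placeholders.
sudo tar -cvpz --one-file-system \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp \
    -C / . | ssh user@jumphost 'cat > /backup/system.tar.gz'
```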

These are the relevant partitions:

fdisk:

    /dev/sdf1        2048    1050623    1048576  512M EFI-System
    /dev/sdf2     1050624 3705751551 3704700928  1,7T Linux-Dateisystem
    /dev/sdf3  3705751552 3750748159   44996608 21,5G Microsoft Basisdaten

lsblk:

    NAME   FSTYPE UUID                                 MOUNTPOINT   SIZE  OWNER GROUP MODE
    sdf                                                             16,4T root  disk  brw-rw----
    ├─sdf1 vfat   B687-437E                            /boot/efi    512M  root  disk  brw-rw----
    ├─sdf2 ext4   56aaa632-d318-4ca9-8094-f803b2237e44 /media/sdf2  1,7T  root  disk  brw-rw----
    └─sdf3 vfat   30A8-C177                                         21,5G root  disk  brw-rw----

df:

    /dev/sdf2  1822227568  817873716  911719948  48% /
    /dev/sdf1      523244       5360     517884   2% /boot/efi
Micka
    Automated disk imaging/cloning? ... This might help (the concept can be done even with a mounted, modified Ubuntu ISO): https://clonezilla.org/related-articles/009_Multiple_customized_Clonezilla_on_hard_drive/MultipleCustomClonezilla.html – Raffa Jul 01 '22 at 12:59

2 Answers

6

"Drivers" is not going to work. Those are kernel modules and need to be loaded; you can't copy them without backing up the whole kernel (and that would mean the WHOLE system).

I would suggest limiting this to your personal files and creating a script for post-install updates, and treating the restore as setting up a -new- installation (i.e. lots of "sudo apt install/purge" and "gsettings" or "sed" commands you execute afterwards to get your preferences back), not as fixing the old system. That way this ALSO works if you want to install a new version of Ubuntu.


Might I add a different approach ...

If I were you I would not use tar but rsync. You can use rsync on a running system, plus you can use an external destination. Something like this:

sudo rsync -ahPHAXx --delete --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} / {user}@{host}:/backup/{date}/ 

(You could remove mnt and media from this list if you do incremental backups (see below); all the others are tmpfs, so not suited for a backup.) An added benefit is that restoring happens one file after another, so there is no need to watch out for disk space. Plus, you can restore a single file if need be.

If you want a full backup and have the space, you can keep more than one backup by adding a {date} to the destination. On the destination you could delete older backups using some kind of retention logic (keep 7, 14, 30 days and remove older backups).
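That retention logic can be as simple as a find one-liner run on the destination; /backup and the 30-day period below are placeholders:

```shell
# On the backup host: delete dated snapshot directories whose
# modification time is older than 30 days. /backup is a placeholder.
find /backup -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
```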

rsync can also do incremental backups (copying only the differences since the last backup), which lowers bandwidth usage by a lot. How this works: all backups get a timestamp; your 1st backup is a full backup, and every later backup compares the latest timestamp with your current system and then creates a backup of the differences. Benefit: you can tell rsync to restore a specific timestamp (i.e. "make the system like it was at 13:00 two days ago").
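One way to implement those timestamped incremental backups is rsync's --link-dest option: files unchanged since the previous snapshot are hard-linked rather than copied, so every dated directory is a full restorable tree that only costs the space of the differences. A sketch, where "user@host" and /backup are assumptions:

```shell
#!/bin/bash
# Timestamped snapshot backup with --link-dest: unchanged files are
# hard-linked against the previous snapshot ("latest" symlink).
# On the very first run /backup/latest does not exist yet; rsync
# warns about it and simply makes a full copy.
# "user@host" and /backup are placeholders.
DATE=$(date +%Y-%m-%d_%H%M)
rsync -ahPHAXx --delete \
    --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} \
    --link-dest=/backup/latest \
    / "user@host:/backup/$DATE/" \
&& ssh user@host "ln -snf /backup/$DATE /backup/latest"
```

Note the bash shebang: the brace expansion in --exclude= works in bash but not in plain sh.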

Rinzwind
  • is there any possibility to remotely back up the whole system while it is running? – Micka Jul 01 '22 at 10:57
  • 1
    @Micka yes, the rsync command does the whole system (see the / before the destination) except for mnt and media (as those are different partitions or disks anyway). The benefit with rsync is the incremental backup: the 2nd and later backups only copy the changes made, saving gigabytes of space. – Rinzwind Jul 01 '22 at 11:01
  • One more thing: before running rsync, shut down as much of the software on the system to be backed up as you can. In particular, shutting down any database servers is critical -- copying a database file while it's being updated is virtually certain to corrupt the database. – Mark Jul 01 '22 at 23:41
3

Tested on Ubuntu Server 20.04 ... The following procedure will enable you to clone the entire disk of a remote machine after switching the running system to RAM and detaching all disks/disk partitions ... Risk included, of course (but contained as much as possible).

Going to RAM

  • SSH to your server (keep this SSH connection up the whole time and don't close it until the end).

  • Then, become root:

    sudo -i
    
  • Then, save the list of mounted file systems to a file mounted_fs:

    df -TH > mounted_fs
    
  • Then, try to stop all stoppable running services (excluding SSH):

    systemctl list-units --type=service --state=running --no-pager --no-legend | awk '!/ssh/ {print $1}' | xargs systemctl stop
    
  • Then, unmount all that is unused:

    umount -a
    
  • Then, prepare a system environment in RAM (it will take around 2.2 GB) by running the following six commands one after the other, in this order:

    mkdir /tmp/tmproot
    mount none /tmp/tmproot -t tmpfs
    mkdir /tmp/tmproot/{proc,sys,usr,var,run,dev,tmp,oldroot}
    cp -ax /{bin,etc,mnt,sbin,lib,lib64} /tmp/tmproot/
    cp -ax /usr/{bin,sbin,lib,lib64} /tmp/tmproot/usr/
    cp -ax /var/{lib,local,lock,opt,run,spool,tmp} /tmp/tmproot/var/
    
  • Then, run:

    mount --make-rprivate /
    
  • Then, change the system root to that environment:

    pivot_root /tmp/tmproot /tmp/tmproot/oldroot
    
  • Then, mount some needed system parts:

    for i in dev proc sys run; do mount --move /oldroot/$i /$i; done
    
  • Then, restart SSH:

    systemctl restart sshd
    
  • Then, run:

    systemctl list-units --type=service --state=running --no-pager --no-legend | awk '!/ssh/ {print $1}' | xargs systemctl restart
    
  • Then, run:

    systemctl daemon-reexec
    
  • Then unmount the original system root(on the disk):

    umount -l /oldroot/
    
  • Then, run df -h and unmount any remaining mounted disks/disk partitions, like /boot.

Now the system on the server is running entirely from RAM, the disks/disk partitions are completely detached, and you are still connected through SSH ... This is, more or less, as if you had booted from a live USB.

Imaging

You can now use dd (with great care) to clone your entire disk(s) and either save the cloned image(s) on a backup drive attached to the same server (mount the backup drive first) or on another server/NAS on the same network.
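For the server/NAS route, dd can be piped through ssh, optionally compressing in transit; the device, user, host, and destination path below are placeholders:

```shell
# Clone the whole disk, compress in transit, and store the image on
# the NAS over SSH. Adjust the device, credentials, and destination.
dd if=/dev/sdf bs=4M status=progress | gzip -c \
    | ssh user@nas 'cat > /backup/sdf.img.gz'
```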

You can also save the image directly on your machine (a direct and fast internet link is required), like so:

  • Use dd (with great care) to image the entire disk and prepare to send the image from the server to your machine with an nc listener (change /dev/sda to the disk you want to image on the server):

    dd if=/dev/sda | nc -l 4444
    
  • Then, open a new terminal on your machine to receive the image and save it (change 10.0.0.100 to the IP of your server):

    nc 10.0.0.100 4444 > disk.img
    
  • Wait for this to finish ... You'll see a message from dd on the server side, something like this:

    10737418240 bytes (11 GB, 10 GiB) copied, 500.935 s, 21.4 MB/s
    
  • Then, close the new terminal (or press Ctrl+C in it).

Now you have successfully cloned the server's disk(s).
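When the time comes to restore onto the replacement drive, the same nc/dd pair can be run in reverse from a live USB booted on the repaired machine; the device and IP below are placeholders:

```shell
# On the machine holding the image, serve it on port 4444:
#   nc -l 4444 < disk.img
# On the live system booted on the repaired PC, write it to the new
# drive (make very sure /dev/sda really is the replacement disk!):
nc 10.0.0.100 4444 | dd of=/dev/sda bs=4M status=progress
```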

Coming back from RAM

The easiest way to bring the server back to its normal state (system root on disk) is to simply reboot (I strongly suggest rebooting, as it's much safer) ... so, clean up:

rm mounted_fs

Then, reboot (no permanent file systems are mounted, so sending just b to /proc/sysrq-trigger should be fairly safe):

echo "b" > /proc/sysrq-trigger

If, on the other hand, you prefer not to reboot the server, then:

  • Mount your original root partition (the one on the disk) or LV to /oldroot (change device to your root partition or logical volume; refer to the previously saved mounted_fs file):

    mount device /oldroot
    
  • Then, run:

    mount --make-rprivate /
    
  • Then, change the system root back to the one on the disk:

    pivot_root /oldroot /oldroot/tmp/tmproot
    
  • Then, move some needed system parts back:

    for i in dev proc sys run; do mount --move /tmp/tmproot/$i /$i; done 
    
  • Then, unmount /tmp/tmproot:

    umount -l /tmp/tmproot
    
  • Then, run:

    rmdir /tmp/tmproot
    
  • Then, mount the original file systems:

    mount -a
    
  • Then, restart SSH:

    systemctl restart ssh
    
  • Then, start failed services:

    systemctl list-units --type=service --state=failed --no-pager --no-legend | awk '!/ssh/ {print $2}' | xargs systemctl restart
    
  • Then, restart running services:

    systemctl list-units --type=service --state=running --no-pager --no-legend | awk '!/ssh/ {print $1}' | xargs systemctl restart
    
  • Then, run:

    mount --make-rshared /
    
  • Then, run:

    systemctl isolate default.target
    
  • Then, open a new terminal and connect to your server via SSH ... If things work as expected from the new SSH connection, you're done.

  • Clean up:

    rm mounted_fs
    
Raffa