
I have a machine with a UEFI BIOS. I want to install Ubuntu 20.04 desktop with LVM on top of RAID 1, so that my system will continue to work even if one of the drives fails. I haven't found a HOWTO for that. The 20.04 desktop installer supports LVM but not RAID. The answer to this question describes the process for 18.04; however, 20.04 does not provide an alternate server installer. The answers to this question and this question describe RAID, but neither LVM nor UEFI. Does anyone have a process that works for 20.04 with LVM on top of RAID 1 on a UEFI machine?

4 Answers


After some weeks of experimenting and with some help from this link, I have finally found a solution that works. The sequence below was performed with Ubuntu 20.04.2.0 LTS. I have also succeeded with the procedure on 21.04.0 inside a virtual machine. (However, please note that there is a reported problem with Ubuntu 21.04 and some older UEFI systems.)

In short

  1. Download and boot into Ubuntu Live for 20.04.
  2. Set up mdadm and lvm.
  3. Run the Ubuntu installer, but do not reboot.
  4. Add mdadm to target system.
  5. Clone EFI partition to second drive.
  6. Install second EFI partition into UEFI boot chain.
  7. Reboot

In detail

1. Download the installer and boot into Ubuntu Live

1.1 Download

  • Download the Ubuntu Desktop installer from https://ubuntu.com/download/desktop and put it onto a bootable media. (As of 2021-12-13, the iso was called ubuntu-20.04.3-desktop-amd64.iso.)

1.2 Boot Ubuntu Live

  • Boot from the media created in step 1.1.
  • Select Try Ubuntu.
  • Start a terminal by pressing Ctrl-Alt-T. The commands below should be entered in that terminal.

2. Set up mdadm and lvm

In the example below, the disk devices are called /dev/sda and /dev/sdb. If your disks are called something else, e.g., /dev/nvme0n1 and /dev/sdb, replace the disk names accordingly. You may use sudo lsblk to find the names of your disks.
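
For example, the following lists only whole disks, with size and model, which makes it easy to tell the two target drives apart:

  sudo lsblk -d -o NAME,SIZE,MODEL,TYPE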

2.0 Install ssh server

If you do not want to type all the commands below, you may want to install an ssh server, log in via ssh, and cut-and-paste the commands.

  • Install

    sudo apt install openssh-server

  • Set a password to enable external login

    passwd

  • If you are testing this inside a VirtualBox virtual machine, you will probably want to forward a suitable port. Select Settings, Network, Advanced, Port Forwarding, and the plus sign. Enter, e.g., 3022 as the Host Port and 22 as the Guest Port and press OK. Or, from the command line of your host system (replace VMNAME with the name of your virtual machine):

    VBoxManage modifyvm VMNAME --natpf1 "ssh,tcp,,3022,,22"
    VBoxManage showvminfo VMNAME | grep 'Rule'
    

Now, you should be able to log onto your Ubuntu Live session from an outside computer using

ssh <hostname> -l ubuntu

or, if you are testing on a virtual machine on localhost,

ssh localhost -l ubuntu -p 3022

and the password you set above.
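
If you do not know the hostname or IP address of the Live session, you can look it up from its terminal first:

hostname -I        # or: ip -brief address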

2.1 Create partitions on the physical disks

  • Zero the partition tables with

    sudo sgdisk -Z /dev/sda
    sudo sgdisk -Z /dev/sdb
    
  • Create two partitions on each drive; one for EFI and one for the RAID device.

    sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda
    sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sda
    sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sdb
    sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sdb
    
  • Create a FAT32 file system on the EFI partition of the first drive. (It will be cloned to the second drive later.)

    sudo mkfs.fat -F 32 /dev/sda1
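
As an optional sanity check, you can print the resulting partition tables and verify that each disk now has an EFI System partition and a Linux RAID partition:

  sudo sgdisk -p /dev/sda
  sudo sgdisk -p /dev/sdb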
    

2.2 Install mdadm and create md device

Install mdadm

  sudo apt-get update
  sudo apt-get install mdadm

Create the md device. Ignore the warning about the metadata since the array will not be used as a boot device.

  sudo mdadm --create /dev/md0 --bitmap=internal --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2

Check the status of the md device.

$ cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdb2[1] sda2[0]
      1047918528 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.0% (1001728/1047918528) finish=69.6min speed=250432K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

In this case, the device is syncing the disks, which is normal and may continue in the background during the process below.
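
If you want to follow the resync, you can watch the progress or ask mdadm for a more detailed report (optional; the numbers will differ on your system):

  watch -n 10 cat /proc/mdstat     # live view of the resync progress (Ctrl-C to exit)
  sudo mdadm --detail /dev/md0     # verbose status of the array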

2.3 Partition the md device

  sudo sgdisk -Z /dev/md0
  sudo sgdisk -n 1:0:0 -t 1:E6D6D379-F507-44C2-A23C-238F2A3DF928 -c 1:"Linux LVM" /dev/md0

This creates a single partition /dev/md0p1 on the /dev/md0 device. The UUID string identifies the partition as a Linux LVM partition.

2.4 Create LVM devices

  • Create a physical volume on the md device

    sudo pvcreate /dev/md0p1
    
  • Create a volume group on the physical volume

    sudo vgcreate vg0 /dev/md0p1
    
  • Create logical volumes (partitions) on the new volume group. The sizes and names below are my choices. You may decide differently.

    sudo lvcreate -Z y -L 25GB --name root vg0
    sudo lvcreate -Z y -L 10GB --name tmp vg0
    sudo lvcreate -Z y -L 5GB --name var vg0
    sudo lvcreate -Z y -L 10GB --name varlib vg0
    sudo lvcreate -Z y -L 200GB --name home vg0
    

Now, the partitions are ready for the Ubuntu installer.
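
As an optional check before starting the installer, you can list the LVM objects you just created; you should see one physical volume, the volume group vg0, and the five logical volumes:

  sudo pvs
  sudo vgs
  sudo lvs vg0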

3. Run the installer

  • Double-click on the Install Ubuntu 20.04.2.0 LTS icon on the desktop of the new computer. (Do NOT start the installer via any ssh connection!)
  • Answer the language and keyboard questions.
  • On the Installation type page, select Something else. (This is the important part.) This will present you with a list of partitions called /dev/mapper/vg0-home, etc.
  • Double-click on each partition starting with /dev/mapper/vg0-. Select Use as: Ext4, check the Format the partition box, and choose the appropriate mount point (/ for vg0-root, /home for vg0-home, /var/lib for vg0-varlib, and so on).
  • Select the first device /dev/sda for the boot loader.
  • Press Install Now and continue the installation.
  • When the installation is finished, select Continue Testing.

In a terminal, run lsblk. The output should be something like this:

$ lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
...
sda                  8:0    0  1000G  0 disk  
├─sda1               8:1    0   512M  0 part  
└─sda2               8:2    0 999.5G  0 part  
  └─md0              9:0    0 999.4G  0 raid1 
    └─md0p1        259:0    0 999.4G  0 part  
      ├─vg0-root   253:0    0    25G  0 lvm   /target
      ├─vg0-tmp    253:1    0    10G  0 lvm   
      ├─vg0-var    253:2    0     5G  0 lvm   
      ├─vg0-varlib 253:3    0    10G  0 lvm   
      └─vg0-home   253:4    0   200G  0 lvm   
sdb                  8:16   0  1000G  0 disk  
├─sdb1               8:17   0   512M  0 part  
└─sdb2               8:18   0 999.5G  0 part  
  └─md0              9:0    0 999.4G  0 raid1 
    └─md0p1        259:0    0 999.4G  0 part  
      ├─vg0-root   253:0    0    25G  0 lvm   /target
      ├─vg0-tmp    253:1    0    10G  0 lvm   
      ├─vg0-var    253:2    0     5G  0 lvm   
      ├─vg0-varlib 253:3    0    10G  0 lvm   
      └─vg0-home   253:4    0   200G  0 lvm   
...

As you can see, the installer left the installed system root mounted to /target. However, the other partitions are not mounted. More importantly, mdadm is not yet part of the installed system.

4. Add mdadm to the target system

4.1 chroot into the target system

First, we must mount the unmounted partitions:

sudo mount /dev/mapper/vg0-home /target/home
sudo mount /dev/mapper/vg0-tmp /target/tmp
sudo mount /dev/mapper/vg0-var /target/var
sudo mount /dev/mapper/vg0-varlib /target/var/lib
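
To verify that everything ended up in the right place (optional), you can list the mount tree below /target:

findmnt -R /target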

Next, bind some devices to prepare for chroot...

cd /target
sudo mount --bind /dev dev 
sudo mount --bind /proc proc
sudo mount --bind /sys sys

...and chroot into the target system.

sudo chroot .

4.2 Update the target system

Now we are inside the target system. Install mdadm

apt install mdadm

If you get a DNS error, run

echo "nameserver 1.1.1.1" >> /etc/resolv.conf 

and repeat

apt install mdadm

You may ignore any warnings about pipe leaks.

Inspect the configuration file /etc/mdadm/mdadm.conf. It should contain a line near the end similar to

ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6 name=ubuntu:0

Remove the name=... part so that the line reads

ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6
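
If the file does not contain any ARRAY line at all, you can generate one from the running array and append it, then trim the name=... part as described above:

mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf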

Update the list of modules to load at boot.

echo raid1 >> /etc/modules

Update the boot ramdisk

update-initramfs -u
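
To double-check that the raid1 module made it into the new initramfs (optional; the glob matches the installed kernel's image, whose exact name will vary):

lsinitramfs /boot/initrd.img-* | grep -w raid1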

Finally, exit from chroot

exit

5. Clone EFI partition

Now the installed target system is complete. Furthermore, the main partition is protected from a single disk failure via the RAID device. However, the EFI boot partition is not protected via RAID. Instead, we will clone it.

sudo dd if=/dev/sda1 of=/dev/sdb1 bs=4096
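
Optionally, confirm that the two EFI partitions are now bit-for-bit identical:

sudo cmp /dev/sda1 /dev/sdb1 && echo "EFI partitions are identical"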

Run

$ sudo blkid /dev/sd[ab]1
/dev/sda1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="ccc71b88-a8f5-47a1-9fcb-bfc960a07c16"
/dev/sdb1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="fd070974-c089-40fb-8f83-ffafe551666b"

Note that the FAT UUIDs are identical but the GPT PARTUUIDs are different.

6. Insert EFI partition of second disk into the boot chain

Finally, we need to insert the EFI partition on the second disk into the boot chain. For this, we will use efibootmgr.

sudo apt install efibootmgr

Run

sudo efibootmgr -v

and study the output. There should be a line similar to

Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)

Note the path after File. Run

sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'

to create a new boot entry on partition 1 of /dev/sdb with the same path as the ubuntu entry. Re-run

sudo efibootmgr -v

and verify that there is a second entry called ubuntu2 with the same path as ubuntu:

Boot0005* ubuntu  HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Boot0006* ubuntu2 HD(1,GPT,fd070974-c089-40fb-8f83-ffafe551666b,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)

Furthermore, note that the UUID string of each entry is identical to the corresponding PARTUUID string above.
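
If you want to control which entry the firmware tries first, you can also set the boot order explicitly. (Optional; the entry numbers 0005 and 0006 come from the example output above and will differ on your machine.)

sudo efibootmgr -o 0005,0006    # try "ubuntu" first, then fall back to "ubuntu2"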

7. Reboot

Now we are ready to reboot. Check if the sync process has finished.

$ cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdb2[1] sda2[0]
      1047918528 blocks super 1.2 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk

unused devices: <none>

If the syncing is still in progress, it should be OK to reboot; however, I suggest waiting until the syncing is complete before rebooting.

After rebooting, the system should be ready to use! Furthermore, should either of the disks fail, the system will use the EFI partition on the healthy disk and boot Ubuntu with the md0 device in degraded mode.
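
If you want to rehearse a disk failure before relying on the setup (this is not part of the procedure above, and is best tried in a virtual machine first), mdadm lets you fail, remove, and re-add a member by hand; for example, treating /dev/sdb2 as the failing member:

sudo mdadm /dev/md0 --fail /dev/sdb2      # mark the member as failed; mdstat shows [U_]
cat /proc/mdstat                          # array is now degraded
sudo mdadm /dev/md0 --remove /dev/sdb2    # remove it from the array
sudo mdadm /dev/md0 --add /dev/sdb2       # re-add it; the write-intent bitmap keeps the resync short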

8. Update EFI partition after grub-efi-amd64 update

When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.

8.1 Find out clone source, quick way

If you haven't rebooted after the update, use

mount | grep boot

to find out which EFI partition is currently mounted at /boot/efi. That partition, typically /dev/sdb1, should be used as the clone source.

8.2 Find out clone source, paranoid way

Create mount points and mount both partitions:

sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1

Find timestamp of newest file in each tree

sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1

Compare timestamps

cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'

This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.

Unmount the partitions before cloning to avoid inconsistencies between the page cache and the on-disk contents.

sudo umount /tmp/sda1 /tmp/sdb1

8.3 Clone

If /dev/sdb1 was the clone source:

sudo dd if=/dev/sdb1 of=/dev/sda1

If /dev/sda1 was the clone source:

sudo dd if=/dev/sda1 of=/dev/sdb1

Done!
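
If you do this regularly, the two cases can be folded into a small helper script. This is only a sketch under the assumptions of this answer: the EFI partitions are /dev/sda1 and /dev/sdb1, and the one currently mounted at /boot/efi is the up-to-date copy (the "quick way" of section 8.1):

#!/bin/bash
# Clone the currently mounted EFI partition onto its mirror.
set -euo pipefail

SRC=$(findmnt -n -o SOURCE /boot/efi)       # e.g. /dev/sda1
case "$SRC" in
  /dev/sda1) DST=/dev/sdb1 ;;
  /dev/sdb1) DST=/dev/sda1 ;;
  *) echo "Unexpected EFI source: $SRC" >&2; exit 1 ;;
esac

echo "Cloning $SRC -> $DST"
sudo dd if="$SRC" of="$DST" bs=4M conv=fsync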

9. Virtual machine gotchas

If you want to try this out in a virtual machine first, there are some caveats: apparently, the NVRAM that holds the UEFI boot entries is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):

FS0:
\EFI\ubuntu\grubx64.efi

The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
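
A further convenience for this case: the UEFI shell automatically executes a startup.nsh script if it finds one, so you can put the two commands above into such a script on the EFI partition from the running system (use FS1: instead of FS0: if the firmware enumerates the second disk first):

printf 'FS0:\r\n\\EFI\\ubuntu\\grubx64.efi\r\n' | sudo tee /boot/efi/startup.nsh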

  • So, I haven't rebooted yet. But this is the most comprehensive guide I've seen. Will let you know if it works on Ubuntu 21.04. – Chaim Eliyah Apr 28 '21 at 09:57
  • p.s., it appears that if you want to add encryption, you just double-click the logical volume at the end and select Use As Physical Device for Encryption (yes I know it's not physical), then it does the dm-crypt for you. – Chaim Eliyah Apr 28 '21 at 10:10
  • So on reboot I just get grub> and it was impossible to assemble the devices. I went and tried without encryption and it fails to install Grub (I think; it's just spinning and crashed). Conclusion: Unless this has to do with nvme, this doesn't work on 21.04 yet. – Chaim Eliyah Apr 29 '21 at 06:00
  • @ChaimEliyah, does the installation program hang during install? I remember I had that problem with an early version of 20.04. A solution that worked for me was to disconnect the network cable during install. (And fiddle to get the network running afterwards. Weird, but it worked.) – Niclas Börlin Apr 29 '21 at 09:18
  • Yeah I remember that one, I'm not sure if it's the same bug. I'd hoped they'd fixed it in 21.04 but I could try a net-free install. On a temp HD for now. – Chaim Eliyah Apr 29 '21 at 10:00
  • @ChaimEliyah I have successfully performed the procedure with 21.04 (no encryption) in a virtual machine. Have added that info to the post. – Niclas Börlin Apr 29 '21 at 11:39
  • @ChaimEliyah I read about a UEFI problem and Ubuntu 21.04. Have added text and a link to the post. – Niclas Börlin May 03 '21 at 06:12
  • Thanks, I got through everything except actually booting, I'm still on the temp hard drive I installed (which I'm also turning into a raid, for rescue)... basically all I can't update is initramfs. I think I'm doing it wrong. :p – Chaim Eliyah May 06 '21 at 01:13
  • I got it. If you encounter any error with update-initramfs do a update-initramfs -d -v -k all and then update-initramfs -c -v -k all (man update-initramfs for details). – Chaim Eliyah May 06 '21 at 02:19
  • Beware of bug https://bugs.launchpad.net/ubuntu/+source/linux-signed-hwe-5.11/+bug/1942935 which may or may not be fixed in your kernel version. – Mikko Rantalainen Sep 13 '21 at 13:33
  • I have a big problem with these instructions and I cannot find any way to correct it. I ran them successfully on a ProLiant ML 110. After the system had almost booted, I decided I did not like the partition layout, so I tried rerunning the instructions from the beginning. Every time, even if I wipe the disks, when I run the command

    sudo pvcreate /dev/md0p1

    it comes back with an error saying that vg0 is already created and it will not allow me to destroy it. What in the heck is going on here? Also, it keeps adding entries to the EFI BIOS of the ProLiant; how do I get rid of them?

    – Ted Mittelstaedt Dec 10 '21 at 00:38
  • @TedMittelstaedt, I have occasionally encountered similar problems. It appears that both the md devices and the lvm devices have to be wiped in the proper order for them to be completely deleted. I suggest you boot from the Live stick, install mdadm and inspect whatever lvm devices are present. Then you run XXremove, XX=lv, vg, pv until you have nothing left. THEN you reboot and start from the top. Let me know if it does not work out for you. – Niclas Börlin Dec 11 '21 at 17:58
  • Your comprehensive instructions are excellent! If I want to install a fresh version of the OS from a Live ISO, after having set up RAID1 following your instructions, do I have to do everything from step 3 onwards? Specifically, do I need to execute step 4 (Add mdadm to the target system)? (My data is on a separate partition from the OS, and the goal would be to reinstall the OS without losing the data partition on the RAID array). – Enterprise Oct 16 '23 at 20:46
  • @Enterprise: I interpret your situation as that you have 2 disks (A and B, say) with RAID1 and a 3rd disk (C) that you want to install your new OS on. Is your question if you can install the OS on disk C and attach the RAID1 on disk A+B instead of creating a RAID partitioning as per the instructions? If yes, I believe that is possible. I would probably do some experiments in a virtual machine to make sure I know the necessary commands before I commit. – Niclas Börlin Oct 18 '23 at 05:12
  • Almost, but C is also on the RAID array... I have two physical disks, A and B. Each disk has two partitions: A1+A2, and B1+B2. A1 and B1 are EFI partitions (per your instructions). I created a RAID1 array, let's call it R, using partitions A2 and B2. I created two GPT partitions on R, R1 and R2. R1 is mounted at / (root), and R2 is mounted at /mnt/data. R2 is where I keep all my important files. I installed Ubuntu on R1 (which includes /home, /var, etc.). I would like to be able to reinstall Ubuntu on R1 without losing the important data on R2 (mounted at /mnt/data). – Enterprise Oct 19 '23 at 01:45
  • I was able to reinstall the OS on the RAID1 array. You must start following the instructions at step 3. After reinstalling the OS (step 3), lsblk will show md127 instead of md0 (or whatever you had previously called your RAID). Therefore, in step 4.1, you will need to mount the new name ( md127, for example, as shown by lsblk). However, when you update your target system, in step 4.2, just use /dev/md/0 as shown above, ignoring the md127 name. After you reboot, the RAID array will be named md0 (or whatever you had previously called your RAID). – Enterprise Oct 29 '23 at 02:18

Apologies. My feedback was apparently unclear, so here goes again. The question is "Does anyone have a process that works for 20.04 with LVM on top of RAID 1 for a UEFI machine?"

My "answer" is that the instructions given in steps 1-7 were both precise and appropriate - many thanks - but I had difficulties with 20.04 because it didn't support the XID 641 onboard graphics of my modern motherboard. I tried with 21.10 desktop and had no problems at all. Note that I switched SATA from RAID to AHCI in the BIOS beforehand and waited for syncing to complete in step 7, otherwise, a painless procedure. The target machine is a Ryzen 9 5950X, ASUS Crosshair VIII Hero motherboard, 2x8TB discs.


These are excellent, detailed instructions. I just want to add that the Desktop installer for 22.10 and 23.04 has no support for RAID or LVM; it does not see partitions/file systems created that way. The solution is to switch to the Server installer. After the server is installed, run "sudo apt install ubuntu-desktop" and install any additional drivers (for example, NVIDIA).


If you use Niclas Börlin's answer, consider using rsync instead of dd:

mkdir mnt
sudo mount /dev/sdX1 mnt    # whichever of sda1 or sdb1 is NOT mounted at /boot/efi
sudo rsync -av --delete /boot/efi/ mnt
sudo umount mnt

Since the source is always the live /boot/efi, this makes it much harder to accidentally overwrite the up-to-date copy if you get the two partitions mixed up.

Jon Hulka