I've used the ubuntu server installation image to install onto a RAID 1, which is two disks mirrored for redundancy.
My configuration is as follows:
/dev/sda - 500GB
/dev/sda1 - 1GB, EFI system partition which mounts to /boot/efi
/dev/sda2 - 499GB, RAID Member
/dev/sdb - 500GB
/dev/sdb1 - 1GB, EFI system partition (not currently mounted)
/dev/sdb2 - 499GB, RAID Member
/dev/md0 - 499GB RAID Array
a little unpartitioned free space
/dev/md0p2 - 256MB ext2 filesystem which mounts to /boot
/dev/md0p3 - 498GB Linux LVM2 physical volume
and the LVM2 volume group contains
/dev/ubuntu-vg/root - 494GB ext4 filesystem which mounts to /
/dev/ubuntu-vg/swap_1 - 4GB swap volume
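For reference, this whole layout can be dumped in one tree view with lsblk (the fallback echo is just so the snippet degrades gracefully on systems without sysfs access):

```shell
# Print the block-device tree: disks, partitions, the md array and LVM volumes.
# On the layout described above this shows sda/sdb, md0, and ubuntu-vg.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT || echo "lsblk unavailable in this environment"
```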
This boots fine. I am using a Mac mini, though, and I'm a little confused: when I hold the "alt" key during boot, it doesn't show any Ubuntu boot devices to choose from, yet if I don't press the alt key, it boots Ubuntu anyway. ----- I don't think this is related to my problem, but it's noteworthy. Probably just a quirk of Apple's EFI implementation.
My concern, though, is what happens if disk sda fails. Is anything in /boot/efi ever used again once the system is running? If I reboot and only disk sdb is working, will it boot? I don't think so, because when I mount /dev/sdb1 to see what is there, it is empty. What happens if part of disk sda goes bad (making partition sda2 junk) while partition sda1 is still good: will Ubuntu boot from the RAID member on sdb2? How can I check this?
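One thing that at least shows what the RAID layer itself thinks is the md status; a minimal sketch (guarded so it is harmless on machines without md arrays):

```shell
# /proc/mdstat lists every md array and its member state:
# "[UU]" means both mirrors are active, "[U_]" means one member has dropped out.
mdstat=$(cat /proc/mdstat 2>/dev/null || echo "no md arrays on this system")
echo "$mdstat"

# Per-array detail (state, failed devices, resync progress), if mdadm is present.
if command -v mdadm >/dev/null 2>&1 && [ -e /dev/md0 ]; then
    mdadm --detail /dev/md0
fi
```

This only answers the sda2/sdb2 half of the question; it says nothing about whether the firmware could still find a bootloader on sdb1.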
I have seen several references suggesting I should run
grub-install /dev/sdb
to install to the second drive. Some of these references are:
https://help.ubuntu.com/community/Installation/SoftwareRAID
http://kudzia.eu/b/2013/04/installation-of-debian-wheezy-on-mdadm-raid1-gpt/
http://elblogdedually.blogspot.com/2015/02/how-to-install-ubuntumint-on-software.html
However, I think most of those were written for BIOS/MBR configurations, because when I run that command, the partition /dev/sdb1 stays empty and the files in /dev/sda1 are modified (I've looked at the mounted copy in /boot/efi).
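For what it's worth, on UEFI systems grub-install accepts an --efi-directory option, so it can in principle be pointed at the second ESP once that partition is mounted somewhere. This is a hedged sketch of that idea, not a tested recipe: the temporary mount point and the choice of --no-nvram (to avoid creating a second firmware boot entry) are my assumptions.

```shell
# Install the UEFI GRUB image onto a second, already-formatted ESP.
install_grub_to_spare_esp() {
    dev=$1
    mnt=$(mktemp -d)     # hypothetical temporary mount point for the spare ESP
    mount "$dev" "$mnt"
    grub-install --target=x86_64-efi --efi-directory="$mnt" \
                 --bootloader-id=ubuntu --no-nvram --recheck
    umount "$mnt"
    rmdir "$mnt"
}

# Only attempt this where the second ESP actually exists (run as root).
if [ -b /dev/sdb1 ]; then
    install_grub_to_spare_esp /dev/sdb1
fi
```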
I've seen another reference (How to install Ubuntu server with UEFI and RAID1 + LVM) that says I should run
dd if=/dev/sda1 of=/dev/sdb1
which seems like it should work, although it's hard for me to unmount /dev/sda1 to do that (do I have to?), and as I mentioned above, I don't know how the RAID is referenced, so I don't know what will happen if one of the members has failed.
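For the record, dd can read /dev/sda1 while it is mounted (the ESP is rarely being written to), but /dev/sdb1 must not be mounted while it is overwritten. An alternative would be a plain file-level copy between the two mounted ESPs; here is a sketch using temporary directories to stand in for /boot/efi and a hypothetical mount of /dev/sdb1, so the mechanics can be seen without touching real disks:

```shell
# Temp directories stand in for the two mounted ESPs.
primary=$(mktemp -d)    # would be /boot/efi (on sda1)
spare=$(mktemp -d)      # would be a mount of /dev/sdb1

# Pretend the primary ESP holds a bootloader tree.
mkdir -p "$primary/EFI/ubuntu"
echo "configfile" > "$primary/EFI/ubuntu/grub.cfg"

# Copy everything, preserving attributes, then verify the two trees match.
cp -a "$primary"/. "$spare"/
diff -r "$primary" "$spare" && echo "ESPs are identical"
```

A clean diff at the end is the check that the copy is complete; on the real system you would run the same diff against the two mount points.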
And then the other question I have is: once I figure out the right way to duplicate the EFI system partition onto both disks, how often do I need to update it? It really seems like I should not have to worry about this at all, but I think I do. Apple's RAID setup lets either disk boot without worrying about this kind of thing... why can't Ubuntu's be that easy?
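On the update question: /boot/efi normally only changes when GRUB itself is updated (a grub-install run or a grub package upgrade), so the duplicate only needs refreshing then. One way I can imagine automating it is an APT hook that re-runs the copy after package operations. This is a hypothetical config fragment, and /usr/local/sbin/sync-esp is a script you would write yourself to copy /boot/efi onto the second ESP:

```
# /etc/apt/apt.conf.d/99-sync-esp  (hypothetical file name)
# After any dpkg run, refresh the spare ESP from the primary one.
DPkg::Post-Invoke { "test -x /usr/local/sbin/sync-esp && /usr/local/sbin/sync-esp || true"; };
```

The "|| true" keeps a failed sync from aborting the package operation itself.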
Comments from Niclas Börlin (author of the answer I referenced):

[...] efibootmgr modifies any disk. The man page says: "-d | --disk DISK  The disk containing the loader (defaults to /dev/sda)". I interpret "containing" as meaning that the loader should already be there, but I may be wrong. At any rate, I cannot see why efibootmgr should modify anything more than the -d drive (if even that). If you wonder why my instructions do not contain any explicit "efibootmgr -d /dev/sda" instruction, it is executed by grub-install (if I recall correctly). – Niclas Börlin Aug 13 '15 at 17:54

The /dev/sda1 and /dev/sdb1 partitions should have a different UUID (as seen in /dev/disk/by-partuuid), but the same UUID as reported by blkid (confusing, yes...)! In my system (that was configured and tested as per my instructions), I have:

    $ ls -la /dev/disk/by-partuuid | grep 'sd[ab]1'
    lrwxrwxrwx 1 root root 10 Aug 13 12:23 48...0f -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Aug 13 12:23 a9...2b -> ../../sda1
    $ blkid /dev/sd[ab]1
    /dev/sda1: UUID="9E78-F2D6" TYPE="vfat"
    /dev/sdb1: UUID="9E78-F2D6" TYPE="vfat"

– Niclas Börlin Aug 13 '15 at 18:06