
I've used the Ubuntu Server installation image to install onto a RAID 1 (two disks mirrored for redundancy).

My configuration is as follows:

/dev/sda - 500GB
/dev/sda1 - 1GB, EFI system partition which mounts to /boot/efi
/dev/sda2 - 499GB, RAID Member

/dev/sdb - 500GB
/dev/sdb1 - 1GB, EFI system partition (not currently mounted)
/dev/sdb2 - 499GB, RAID Member

/dev/md0 - 499GB RAID Array
a little unpartitioned free space
/dev/md0p2 - 256MB ext2 filesystem which mounts to /boot
/dev/md0p3 - 498GB Linux LVM2 physical volume

and then the LVM2 volume is composed of
/dev/ubuntu-vg/root - 494GB ext4 filesystem which mounts to /
/dev/ubuntu-vg/swap_1 - 4GB swap volume
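
For reference, a layout like the one above can be confirmed from the running system (assuming the device names shown) with:

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
sudo pvs; sudo vgs; sudo lvs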

This boots fine. I am using a Mac mini, though, and I'm a little confused: when I press the "alt" key during bootup, it doesn't show any Ubuntu boot devices to choose from, but if I don't press the alt key, it boots Ubuntu anyway. I don't think this is related to my problem, but it seems noteworthy; it's probably just a quirk of Apple's EFI implementation.

My concern, though, is what happens if disk sda fails. Is anything in /boot/efi ever used once the system is running? If I reboot and only disk sdb is working, will it boot? I don't think so, because when I mount /dev/sdb1 to see what is there, it is empty. And what happens if only part of disk sda goes bad (making partition sda2 junk) while partition sda1 is still good: will Ubuntu boot from the RAID member on sdb2? How can I check this?
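
As a concrete starting point (my own sketch, using the device names above), the array state and the contents of the second ESP can be inspected with:

cat /proc/mdstat                # shows md0 and whether both members are active
sudo mdadm --detail /dev/md0    # per-member state (active, faulty, removed)
sudo mount /dev/sdb1 /mnt
ls -lR /mnt                     # an empty listing means sdb1 holds no boot files
sudo umount /mnt

Actually simulating a failure (for example, failing and removing one member with mdadm, or detaching a disk and rebooting) is the only real way to confirm the boot behaviour.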

I have seen several references suggest I should run

grub-install /dev/sdb

to install to the second drive. Some of these references are:

https://help.ubuntu.com/community/Installation/SoftwareRAID
http://kudzia.eu/b/2013/04/installation-of-debian-wheezy-on-mdadm-raid1-gpt/
http://elblogdedually.blogspot.com/2015/02/how-to-install-ubuntumint-on-software.html

However, I think most of those are talking about BIOS/MBR configurations, because when I run that command, the partition /dev/sdb1 stays empty and the files in /dev/sda1 are modified (I've looked at the mounted copy in /boot/efi).
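
As far as I understand it, on a UEFI install grub-install writes its files to whatever ESP is mounted at the EFI directory, not to the device given as an argument, so one way to populate the second ESP directly would be something along these lines (a sketch only, not taken from the references above; the mount point /mnt/efi2 is made up):

sudo mkdir -p /mnt/efi2
sudo mount /dev/sdb1 /mnt/efi2
sudo grub-install --target=x86_64-efi --efi-directory=/mnt/efi2 --bootloader-id=ubuntu --recheck
sudo umount /mnt/efi2

By default grub-install also tries to register an NVRAM boot entry for the new location; --no-nvram skips that step.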

I've seen another reference (How to install Ubuntu server with UEFI and RAID1 + LVM) that says I should run

dd if=/dev/sda1 of=/dev/sdb1

which seems like it should work, although it's hard for me to unmount /dev/sda1 to do that (do I have to?), and, as I mentioned above, I don't know how the RAID is referenced, so I don't know what is going to happen if one of the members has failed.
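
If the copy is done from a live CD/USB, where neither ESP is mounted, a minimal form of that command (my sketch; the block size is only for speed) would be:

sudo dd if=/dev/sda1 of=/dev/sdb1 bs=1M
sync

Since dd copies the partition block for block, the FAT32 filesystem and its UUID come across unchanged.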

The other question I have is: once I figure out the right way to duplicate the EFI system partition onto both disks, how often do I need to update it? It really seems like I should not have to worry about this at all, but I think I do. Apple's RAID system does allow either disk to boot without worrying about this kind of thing; why can't Ubuntu's be that easy?

user1748155

1 Answer


Re copying the partition: you should be able to boot into an Ubuntu Live CD/USB and clone it from there. You also need to insert the ESP into the boot chain. For details, I wrote up detailed instructions here.
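
As a rough illustration of the "insert the ESP into the boot chain" step (a sketch only; the disk and partition numbers follow the question's layout, and the label and loader path are assumptions):

sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (sdb)" -l '\EFI\ubuntu\grubx64.efi'
sudo efibootmgr -v

The second command just lists the resulting entries and BootOrder so you can verify that the new entry was added.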

  • Also, any reason why you chose to set up a separate RAID for / and swap? – user1748155 Aug 12 '15 at 20:45
  • When you run efibootmgr, does that modify all disks, or just /dev/sdb? – user1748155 Aug 12 '15 at 20:48
  • Your solution is interesting and well written up!! Is it a problem though if /dev/sda1 and /dev/sdb1 have the same UUID? – user1748155 Aug 12 '15 at 20:58
  • To be honest, the reason I chose two RAIDs with one partition each (swap and /, respectively) instead of one RAID with two partitions was...habit. My sysadmin at work usually chooses that solution, so I might as well... My impression is that the difference between the two choices is marginal. – Niclas Börlin Aug 13 '15 at 17:48
  • I am not sure efibootmgr modifies any disk. The man page says: "-d | --disk DISK  The disk containing the loader (defaults to /dev/sda)". I interpret "containing" as meaning that the loader should already be there, but I may be wrong. At any rate, I cannot see why efibootmgr should modify anything more than the -d drive (if even that). If you wonder why my instructions do not contain any explicit efibootmgr -d /dev/sda instruction, it is executed by grub-install (if I recall correctly). – Niclas Börlin Aug 13 '15 at 17:54
  • @user1748155: Re the UUID question, if you follow my instructions, the /dev/sda1 and /dev/sdb1 partitions should have a different UUID (as seen in /dev/disk/by-partuuid), but the same UUID as reported by blkid (confusing, yes...)!

    In my system (that was configured and tested as per my instructions), I have:
    $ ls -la /dev/disk/by-partuuid | grep 'sd[ab]1'
    lrwxrwxrwx 1 root root 10 Aug 13 12:23 48...0f -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Aug 13 12:23 a9...2b -> ../../sda1
    $ blkid /dev/sd[ab]1
    /dev/sda1: UUID="9E78-F2D6" TYPE="vfat"
    /dev/sdb1: UUID="9E78-F2D6" TYPE="vfat"

    – Niclas Börlin Aug 13 '15 at 18:06
  • @user1748155: Thanks for the feedback, I have updated my answer with more info about the UUIDs. :) – Niclas Börlin Aug 13 '15 at 18:28
  • Okay, so do you think that efibootmgr may modify some kind of NVRAM in the UEFI chip? If so, then this action will not persist if the RAID is moved to a new system (maybe some other hardware component failed, but not the disks)? – user1748155 Aug 18 '15 at 05:43
  • So, you are saying that the file system and the partition each have a separate UUID? Since /dev/sdb1 is a partition that was already created by a partition editor (and you are writing data into that partition, not creating it with dd), then its unique partition UUID was created when the partition was created? – user1748155 Aug 18 '15 at 05:47
  • Regarding the two RAIDs for swap and /, I'm thinking the better solution is to use one RAID. I've seen other write-ups that also suggest two RAIDs, but the problem is that it is more complex that way, and it doesn't move towards using LVM instead of partitions in the future. From what I have read, LVM is more flexible with resizing, etc. A non-RAID install also seems to require LVM for encrypted disks. – user1748155 Aug 18 '15 at 05:55
  • efibootmgr: Yes, that is my understanding. However, if your new machine supports "Run UEFI application" or similar in BIOS you should be able to boot into your system and run efibootmgr to install your disks into the boot order. Alternatively, you could boot a live CD/USB, install grub-efi and run efibootmgr from there. UUID: Apparently. Definitely for FAT partitions, where the UUID should rather be called "serial number". Also see http://ubuntuforums.org/showthread.php?t=1240146. – Niclas Börlin Aug 18 '15 at 12:06
  • RAID: The only advantage of 2xRAID I can think of is that if somehow the disk area inside one partition breaks, say sdb2 (can this happen?), then 2xRAID would leave you with one degraded RAID and one complete. A 1xRAID will leave you with a degraded array with two partitions. Since you would want to replace the disk in both cases, as I said, the difference is probably minor. Re LVM, etc., have you considered zfs? My current config is / on RAID1 and /bigdata on zfs over 2x4TB spindisks. In theory it looks excellent unless you want to shrink partitions. – Niclas Börlin Aug 18 '15 at 12:18
  • So, since you say that the efibootmgr command probably only interacts with the NVRAM in the UEFI chip and doesn't actually touch the disk at all, maybe that step is not really necessary and I could just do it manually in my BIOS/UEFI setup menu, if that is an option there? I don't have a dual-boot system, so if my BIOS/UEFI just chooses whatever it finds first, I probably need to care about this even less? – user1748155 Aug 22 '16 at 20:02
  • In your write-up, I guess it's possible to use a variation of step 7 instead of step 8? You mainly did it in step 8 with the dd command as a shortcut, to avoid having to rebuild the filesystem on the ESP too, since we don't care that the UUID of the filesystem is identical? Or maybe you did it that way because the UUID of the filesystem has to be identical, since fstab can only accept one value for the /boot/efi mount? Although you said in your comments that the ESP never changes with system upgrades, that may not be true, and I'm trying to come up with a more systematic ESP upgrade workflow. – user1748155 Aug 22 '16 at 22:11
  • @user1748155: If your bios has support for installing ESPs into the boot chain, that solution should also work. Since BIOSes vary between systems, I chose the efibootmgr commands to get an instruction that was portable between systems. – Niclas Börlin Aug 23 '16 at 07:02
  • @user1748155: Re your step 7/8 question, are you saying that you would like to repeat step 7 on /dev/sdb1? I guess so (haven't tried)...but you'd also have to format /dev/sdb1 (done in step 2 for /dev/sda1) and somehow copy the FAT32 UUID (currently 9E78-F2D6 on my system) to /dev/sdb1 before grub-install. And if you want a blank FAT32 system with an identical UUID, dd does both in one command. – Niclas Börlin Aug 23 '16 at 07:17
  • I've been doing some testing now, after extensive reading, and it looks like my BIOS pretty much automatically generates a list of the valid ESPs that it finds and then lets you choose among them. I believe it just uses whichever one you booted last if you don't invoke the boot menu. The efibootmgr commands are useful to know, but you may want to indicate in your tutorial that they may be optional, depending on your BIOS. – user1748155 Aug 24 '16 at 21:31
  • I guess you are right about step 7/8. Maybe my suggestion was not a good one, considering the UUID of the FAT32 filesystem needs to be the same so that fstab can always find it, regardless of which disks are still working (a sketch of that variant appears after this thread). – user1748155 Aug 24 '16 at 21:36
  • Also, regarding your sleepAwhile script: this did not seem to work for me. I am now using Ubuntu 16.04 and I can't boot with a degraded RAID. I also tried the rootdelay= option that you mentioned in your tutorial. I am able to boot manually using the commands in this comment: http://serverfault.com/questions/688207/how-to-auto-start-degraded-software-raid1-under-debian-8-0-0-on-boot#comment938015_693707 . I didn't try the other permanent solutions mentioned in that answer because they looked like even more hackery, and I'm okay with typing two commands right now to get it booting. – user1748155 Aug 24 '16 at 21:47
  • Also, regarding my BIOS that does not need the efibootmgr commands, I've now ditched the mac mini mentioned in my original question in favor of an ASUS x99 motherboard. – user1748155 Aug 24 '16 at 21:50
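
For the step 7/8 discussion above, a minimal sketch of the "repeat step 7 on /dev/sdb1" variant (my own illustration, not part of the linked write-up; the serial number is the example value from these comments and /mnt/efi2 is a made-up mount point):

sudo mkfs.fat -F 32 -i 9E78F2D6 /dev/sdb1
sudo mkdir -p /mnt/efi2
sudo mount /dev/sdb1 /mnt/efi2
sudo grub-install --target=x86_64-efi --efi-directory=/mnt/efi2 --bootloader-id=ubuntu
sudo umount /mnt/efi2

As noted in the comments, dd achieves the formatting and the UUID copy in one command, which is why the original write-up uses it.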