15

I used to install my servers with LVM over software RAID1, with GRUB installed on the MBR of both drives. Now I have a UEFI server, and the compatibility (BIOS) mode does not seem to work.

So I went the way of installing with UEFI.

First test: a single-drive installation works fine.

Then I tried to install with RAID1 + LVM. I partitioned my two drives the same way:

  • an EFI system partition, 200MB
  • a physical RAID partition

Then I set up:

  • a RAID 1 array using the two disks' RAID partitions
  • an LVM volume group on the RAID 1 array
  • three logical volumes: /, /home and swap

The installation completed, but on reboot I get a GRUB shell and am stuck.

So, is it possible to have grub2-efi work on LVM over RAID1? How can this be achieved? Are there other bootloader alternatives (direct Linux loading from EFI)? etc.

Tim
alci

3 Answers

12

Ok, I found the solution and can answer my own questions.

1) Can I use LVM over RAID1 on a UEFI machine?

Yes, definitely. And it will be able to boot even if one of the two disks fails.

2) How to do this?

There seems to be a bug in the installer, so just using the installer results in a failure to boot (GRUB shell).

Here is a working procedure:

1) manually create the following partitions on each of the two disks:

  • a 512MB partition with type UEFI at the beginning of the disk
  • a partition of type RAID after that
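For example, a rough shell equivalent of this step using sgdisk (device names, partition numbers and the 512MB size are only illustrative; treat this as a sketch, not the exact commands I ran):

# sketch: partition both disks identically (GPT)
# ef00 = EFI System Partition, fd00 = Linux RAID
for disk in /dev/sda /dev/sdb; do
    sgdisk --zap-all "$disk"
    sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI system" "$disk"
    sgdisk -n 2:0:0     -t 2:fd00 -c 2:"Linux RAID" "$disk"
    mkfs.vfat -F 32 "${disk}1"
done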

2) create your RAID 1 array with the two RAID partitions, then create your LVM volume group with that array, and your logical volumes (I created one for root, one for home and one for swap).
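The shell equivalent of that step would look roughly like this (array, volume group and logical volume names/sizes are just examples, not the ones from my install):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md0                      # make the array an LVM physical volume
vgcreate vg0 /dev/md0                  # volume group on top of the RAID1 array
lvcreate -L 20G -n root vg0            # logical volumes for /, swap and /home
lvcreate -L 4G  -n swap vg0
lvcreate -l 100%FREE -n home vg0
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkswap    /dev/vg0/swap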

3) let the installation go on, and reboot. FAILURE! You should get a GRUB shell.

4) it might be possible to boot from the GRUB shell, but I chose to boot from a rescue USB disk. In rescue mode, I opened a shell on my target root filesystem (that is, the one on the root LVM logical volume).

5) get the UUID of this target root partition with 'blkid'. Note it down or take a picture with your phone; you'll need it in the next step.
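The output looks something like this (the device path assumes the hypothetical vg0/root volume from the sketch above; the UUID is only a placeholder):

blkid /dev/mapper/vg0-root
# /dev/mapper/vg0-root: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"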

6) mount the EFI system partition (mount /boot/efi) and edit the grub.cfg file:

vi /boot/efi/EFI/ubuntu/grub.cfg

Here, replace the erroneous UUID with the one you got at step 5. Save.
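For reference, on Ubuntu this file is normally just a small stub that tells GRUB where to find the real /boot/grub/grub.cfg, roughly like the lines below (the UUID is a placeholder, and the prefix path differs between releases and depends on whether /boot is a separate partition):

search.fs_uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx root
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg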

7) to be able to boot from the second disk, copy the EFI partition to that second disk:

dd if=/dev/sda1 of=/dev/sdb1

(change sda and sdb to whatever suits your configuration)

8) Reboot. In your UEFI settings screen, set the two EFI partitions as bootable, and set a boot order.
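If the firmware screen is awkward to use, the same thing can usually be done from Linux with efibootmgr; a sketch, assuming the ESP is partition 1 on each disk and that the loader is EFI/ubuntu/grubx64.efi (check what actually exists under EFI/ on your ESP):

efibootmgr -c -d /dev/sda -p 1 -L "ubuntu (disk 1)" -l '\EFI\ubuntu\grubx64.efi'
efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu (disk 2)" -l '\EFI\ubuntu\grubx64.efi'
efibootmgr -v                 # list entries and the current BootOrder
efibootmgr -o 0001,0002       # example order; use the entry numbers shown by -v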

You're done. You can test it: unplug one disk or the other, and it should still boot!

0x2b3bfa0
alci
  • Thanks for the detailed procedure. Please could you indicate your Boot-Info ? ( https://help.ubuntu.com/community/Boot-Info ) – LovinBuntu Oct 10 '13 at 21:25
  • @LovinBuntu Here is the output of Boot-Info, started from a usb key: http://paste.ubuntu.com/6223137/ – alci Oct 11 '13 at 16:28
  • I got the failure, but couldn't get the grub shell. – Peter Lawrey Feb 26 '14 at 13:35
  • @PeterLawrey maybe you can boot with a rescue usb disk as I did (see point 4) – alci Feb 27 '14 at 09:15
  • @acli amazingly, my system doesn't support boot from USB. :| I tried boot from CD but had trouble mounting the drives again. I suspect I need to install from the Try Ubuntu, and fix the drive before restarting. I have rebuilt without LVM and RAID and it works fine. – Peter Lawrey Feb 27 '14 at 09:25
  • Thank you for pointing me in the right direction regarding how to include the redundant EFI boot partiotion in the overall RAID concept. Regarding the GRUB problem, it seems to be gone as of 14.04. – Run CMD Jul 24 '14 at 05:51
  • 'bkid' should be 'blkid' – Quantum7 Sep 08 '14 at 13:09
  • Hi @alci, I would like you to ask for a little more detail: In step 5), which uuid do you get in case the "/boot" and the "/" (root) are on different partitions? The /boot is not LVM in my case, and the root is. – dmeu May 21 '15 at 11:30
  • @dmeu I'm not sure. I would choose /boot, but you'll have to try... sorry – alci May 21 '15 at 12:19
  • @alci, Hi i just tried that. The problem is that I get this error, that the sector could not be read – dmeu May 21 '15 at 12:22
  • 4
    I have just wasted a couple of days trying to follow a similar procedure, mainly on account of my being stupid, but just in case this could help someone else avoid the same problem, I'll mention that you need to make sure to boot the live USB using UEFI rather than legacy BIOS. (My MB, on 'auto' setting, preferred to boot in legacy mode. I had to turn it off--or manually choose to boot the EFI option--to get the installation to work.) – Jonathan Y. Oct 17 '15 at 15:41
  • 1
    Using Ubuntu 16.04.1, this doesn't seem a problem any more. I set up a 512MB EFI Partition, a SWAP, and a 490GB RAID partiton for RAID1, and on the new md device I installed Ubuntu 16.04 server completely without problems. After rebooting, it started the new system flawless, no need to mess with the EFI partition, fstab etc. – nerdoc Dec 10 '16 at 22:40
  • almost 2019 and i hit this same problem. The issue persists with LVM and Ubuntu 18.04 If history repeats itself, 15 years of Ubuntu has shown me that Ubuntu is a tragic example of that. :D – Abhishek Dujari Dec 23 '18 at 08:03
4

I did this a little over a year ago myself, and while I did have problems, I didn't have the problems listed here. I'm not sure where I found the advice I followed at the time, so I'll post what I did here.

1) Create 128MB EFI partitions at the start of each drive (only one of which will be mounted, at /boot/efi)

2) Create 1 GB /boot RAID1 array, no LVM

3) Create large RAID1 array using LVM

Having /boot on a separate partition/RAID1 array solves the issue of the EFI partition being unable to find what it needs.

And for those looking for more detail, as I was at the time, this is more precisely how I set up my system:

6x 3TB Drives

Have 4 RAID arrays:
/dev/md0 = 1GB RAID1 across 3 drives
   --> /boot (no LVM)
/dev/md1 = 500GB RAID1 across 3 drives
   LVM:
      --> /     =  40GB
      --> /var  = 100GB
      --> /home = 335GB
      --> /tmp  =  10GB

/dev/md2 = 500GB RAID1 across 3 drives (for VM's/linux containers)
   LVM:
      --> /lxc/container1 =  50GB
      --> /lxc/container2 =  50GB
      --> /lxc/container3 =  50GB
      --> /lxc/container4 =  50GB
      --> /lxc/extra      = 300GB (for more LXC's later)

/dev/md3 = 10TB RAID6 across 6 drives (for media and such)
   --> /mnt/raid6 (no LVM)


Disks are setup thus:

/sda => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdb => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdc => /boot/efi (128 MB) | /dev/md0 (1 GB) | /dev/md1 (500GB) | /dev/md3 (2.5TB)
/sdd => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
/sde => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
/sdf => ----- left empty for simplicity ---- | /dev/md2 (500GB) | /dev/md3 (2.5TB)
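For anyone wanting the mdadm side of this spelled out, arrays like these would be created along the following lines (partition numbers are illustrative and have to match how each disk was actually partitioned):

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=6 --raid-devices=6 \
      /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd2 /dev/sde2 /dev/sdf2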

Note only one of the /boot/efi partitions will actually be mounted, and the other two are clones; I did this because I wanted the machine to still boot when losing any one of the 3 disks in the RAID1. I don't mind running in degraded mode if I still have full redundancy, and that gives me time to replace the drive while the machine is still up.
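The clones can be made the same way as in the accepted answer, e.g. (assuming the ESP is the first partition on each of the three boot disks):

dd if=/dev/sda1 of=/dev/sdb1
dd if=/dev/sda1 of=/dev/sdc1

Keep in mind the clones go stale whenever GRUB rewrites files on the mounted ESP, so the copy has to be repeated after such updates.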

Also, if I did not have the second RAID1 array for putting the LXC containers and basically all the databases and such, /var would have to have been MUCH bigger. Having each LXC as its own logical volume was, however, a nice solution to prevent one VM/website from disrupting the others due to out-of-control error logs, for example...

And as a final note, I installed from the Ubuntu Alternate Install USB with 12.04.1 (before 12.04.2 came out), and everything worked quite nicely... after banging my head against it for 72 hours.

Hope that helps somebody!

  • 1
    grub2 handles booting lvm on md directly without a /boot partition just fine, and has for a few years at least. – psusi Jan 23 '14 at 19:52
  • @psusi I wish you were right, my fresh install won't boot from the second disk by itself. All LVM, unlike jhaagsma 's setup. – sjas May 03 '15 at 18:31
2

I had the same problem: EFI boot with two disks and software RAID.

/dev/sda

  • /dev/sda1 - 200MB efi partition
  • /dev/sda2 - 20G physical for raid
  • /dev/sda3 - 980G physical for raid

/dev/sdb

  • /dev/sdb1 - 200MB efi partition
  • /dev/sdb2 - 20G physical for raid
  • /dev/sdb3 - 980G physical for raid

Swap on /dev/md0 (sda2 & sdb2)
Root on /dev/md1 (sda3 & sdb3)

If you enter the grub-rescue shell, boot using:

set root=(md/1)
linux /boot/vmlinuz-3.8.0-29-generic root=/dev/md1
initrd /boot/initrd.img-3.8.0-29-generic
boot

After that, download this patch file - https://launchpadlibrarian.net/151342031/grub-install.diff (as explained on https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1229738)
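For example (assuming the system has network access and wget installed):

wget https://launchpadlibrarian.net/151342031/grub-install.diff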

# keep a backup, then apply the downloaded patch to grub-install
cp /usr/sbin/grub-install /usr/sbin/grub-install.backup
patch /usr/sbin/grub-install grub-install.diff
# install GRUB to the EFI partition of each disk in turn
mount /dev/sda1 /boot/efi
grub-install /dev/sda1
umount /dev/sda1
mount /dev/sdb1 /boot/efi
grub-install /dev/sdb1
reboot
Ljupco