
I have been experimenting with installing CentOS (7 Server with GUI) and Ubuntu (16.04.4 & 17.10 Server and Desktop) on two identical Hosts.

Host 1 = CentOS 7 Server as host to 2 x VMware 14 Pro guests

Host 2 = Ubuntu 16/17 Server as host to 2 x VMware 14 Pro guests

On both hosts: guest 1 = OpenMediaVault 4 & guest 2 = ownCloud 10

Both hosts are self-built machines based on the ASRock Rack C236 WSI server motherboard, configured with i7 CPUs & 32 GB RAM.

On each hardware host the OS is installed on two identical SSDs in a RAID 1 configuration.

The data on the Linux software RAID 10 arrays is replicated between the hosts with rsync.
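For reference, a minimal rsync invocation along these lines; the hostname and path are placeholders I made up, not the actual layout:

    # Replicate the RAID 10 data set to the other host (path and host are placeholders).
    # -a preserves permissions/ownership/times, -H hard links, -A ACLs, -X extended attributes.
    rsync -aHAX --delete /srv/data/ root@host2:/srv/data/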

For Ubuntu 17.10 I followed the how-to below. It was originally written for 14.04 and has been tested on 16.04.4:

How to install Ubuntu 14.04/16.04 64-bit with a dual-boot RAID 1 partition on an UEFI/GPT system?

With Ubuntu 17.10 Desktop everything worked exactly as described, up to "8. Enable boot from the second SSD -- reboot". There may be differences between 17.10 and Ubuntu 14.x/16.04.4, for which the how-to was originally written and tested, but I decided to use Ubuntu 17.10 Desktop anyway.
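For context, my rough understanding of what a "boot from the second SSD" step boils down to on a UEFI install; the device name, mount point and bootloader id are placeholders, not the how-to's exact commands:

    # Install a second copy of GRUB onto the second SSD's EFI System Partition
    # (all names below are placeholders for illustration only).
    sudo mkdir -p /boot/efi2
    sudo mount /dev/sdb2 /boot/efi2        # second SSD's ESP
    sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
         --bootloader-id=ubuntu-alt --recheck
    sudo update-grub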

At the moment the Ubuntu 17.10 reboot drops into the initramfs, demanding an fsck of the root partition. Running fsck succeeds, but the next reboot drops back into the initramfs again.
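For illustration, roughly what the loop looks like at the (initramfs) prompt; only the fsck/exit lines reflect what I actually do, the mdstat/mdadm lines are just the usual checks one can run there, and /dev/md1 is the root array name in my layout:

    (initramfs) cat /proc/mdstat          # check whether the RAID 1 root array has assembled
    (initramfs) mdadm --assemble --scan   # assemble any arrays that are still missing
    (initramfs) fsck -y /dev/md1          # the fsck it asks for; finishes without errors
    (initramfs) exit                      # boot continues, but the next reboot lands here again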

I would like to get this host installation working with 17.10 so I can compare it with the remote CentOS 7 host.

Otherwise, if I cannot solve the problem with the solution in the link above, can anyone suggest a RAID 1 boot configuration for newer Ubuntu Server/Desktop releases, or is it advisable to stick to Ubuntu 16.04.4 (or even 14.04)?

TIA :-)

Pingumann
  • It seems the problem lies in the partitioning of the SSDs in a RAID 1 config. Setting up the partitioning by hand with sgdisk and mdadm works (a rough sketch follows these comments), but as soon as the Ubuntu 16.04.4 or 17.10 installer is asked to create non-RAID partitions for /boot and /boot/efi together with a RAID 1 for / and swap, all sorts of problems crop up. – Pingumann Mar 15 '18 at 14:19
  • With further installation attempts the config is sometimes accepted, but often formatting an empty RAID partition with ext4 simply did not work; it only worked after first formatting it as xfs and then back to ext4. Not once could a working boot setup be installed on the SSDs at the end of the installation process; there was just an error message that the GRUB boot installation had failed, with no further info. Rather discouraging, to say the least. – Pingumann Mar 15 '18 at 14:21
  • In a moment of despair I booted the wrong USB image, and CentOS 7 installed without any issues using the same SSD config that Ubuntu could not handle. The partitioning is below: – Pingumann Mar 15 '18 at 14:53
  • All disks are GPT: /dev/sd[a-g] (7 x 2 TB Samsung HDD) – RAID 10 members (md0); /dev/sdh (50 GB Drevo X1 SSD) – sdh1 (boot-main), sdh2 (efi-main), plus RAID 1 members sdh3 (md1) & sdh4 (md2); /dev/sdi (50 GB Drevo X1 SSD) – sdi1 (boot-alt), sdi2 (efi-alt), plus RAID 1 members sdi3 (md1) & sdi4 (md2). – Pingumann Mar 15 '18 at 15:15
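
A rough sketch of the by-hand partitioning mentioned in the comments; partition sizes, type codes and array names are illustrative choices matching the layout above, not an exact record of the commands that were run:

    # Partition the first SSD: non-RAID /boot and ESP, RAID members for / and swap.
    # 8300 = Linux filesystem, EF00 = EFI System Partition, FD00 = Linux RAID.
    sgdisk -n1:0:+1G   -t1:8300 -c1:"boot-main" /dev/sdh
    sgdisk -n2:0:+512M -t2:EF00 -c2:"efi-main"  /dev/sdh
    sgdisk -n3:0:+40G  -t3:FD00 -c3:"root-raid" /dev/sdh
    sgdisk -n4:0:0     -t4:FD00 -c4:"swap-raid" /dev/sdh

    # Clone the layout to the second SSD and give it fresh GUIDs.
    sgdisk -R /dev/sdi /dev/sdh
    sgdisk -G /dev/sdi

    # Build the RAID 1 arrays for / and swap.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh3 /dev/sdi3
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdh4 /dev/sdi4
    mkfs.ext4 /dev/md1
    mkswap /dev/md2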
