I have been experimenting with installing CentOS (7 Server with GUI) and Ubuntu (16.04.4 & 17.10, Server and Desktop) on two identical hosts:
Host 1 = CentOS 7 Server, hosting 2 x VMware Workstation 14 Pro guests
Host 2 = Ubuntu 16/17 Server, hosting 2 x VMware Workstation 14 Pro guests
On both hosts: guest 1 = OpenMediaVault 4 and guest 2 = ownCloud 10
Both hosts are self-assembled, built on the ASRock Rack C236 WSI server motherboard and configured with i7 CPUs & 32 GB RAM.
On each host the OS is installed on identical hardware: 2 x SSDs in a RAID 1 configuration.
The data on the Linux software RAID 10 arrays is replicated between the hosts with rsync.
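For completeness, the replication itself is a plain rsync job, roughly along these lines (the hostname and paths here are illustrative, not my exact setup):

    # Pull the data share from host1 onto host2 (illustrative names)
    # -a  preserve permissions, ownership and timestamps
    # -H  preserve hard links
    # -x  stay on one filesystem
    # --delete  mirror deletions so the two copies stay identical
    rsync -aHx --delete root@host1:/srv/data/ /srv/data/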
For Ubuntu 17.10 I followed the description below. The how-to was originally written for 14.04 and has been tested on 16.04.4:
How to install Ubuntu 14.04/16.04 64-bit with a dual-boot RAID 1 partition on an UEFI/GPT system?
With the Ubuntu 17.10 Desktop everything worked exactly as described, up to step "8. Enable boot from the second SSD -- reboot". It may be that 17.10 behaves differently from the releases the how-to was written for (14.x) and tested on (16.04.4).
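As I understand it, step 8 essentially means registering the second SSD's EFI system partition with the firmware. The generic way to do that looks roughly like the following (this is not a quote from the how-to, and the disk, partition number and loader path are assumptions that need adjusting to the actual layout):

    # Add a UEFI boot entry for the ESP on the second SSD
    # (/dev/sdb and partition 1 are examples only)
    sudo efibootmgr --create --disk /dev/sdb --part 1 \
         --label "ubuntu (SSD2)" --loader '\EFI\ubuntu\shimx64.efi'
    sudo efibootmgr -v   # verify that both boot entries now exist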
At the moment, rebooting the Ubuntu 17.10 install drops into the initramfs prompt, demanding an fsck of the root partition. Running fsck completes successfully, but the next reboot loops straight back into the initramfs.
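For anyone trying to reproduce or diagnose this, the checks at the (initramfs) prompt go roughly like this (md0 as the root array name is just an example; substitute the real device):

    # At the (initramfs) prompt: did the RAID 1 array assemble at all?
    cat /proc/mdstat            # the root array should show as active
    mdadm --assemble --scan     # try to assemble anything that didn't start
    fsck -y /dev/md0            # the fsck it demands; completes cleanly
    exit                        # continue booting -- but it loops back here

I have also considered booting a live USB, chrooting into the array and running update-initramfs -u, in case the mdadm configuration baked into the initramfs is stale, but I would like to understand the actual failure first.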
I would like to get this host installation working with 17.10, so that I can compare the remote CentOS 7 host with it.
Otherwise, if the problem can't be solved with the approach in the link above, can anyone suggest a RAID 1 boot configuration that works on newer Ubuntu Server/Desktop releases, or is it advisable to stick with Ubuntu 16.04.4 (or even 14.04)?
TIA :-)