
Since my development server got into a kernel panic, I ended up reinstalling it. Instead of manually installing everything again, I thought I would get a configuration closer to the production server by restoring a full server backup from production to development using my cloud backup solution.

The OS is Ubuntu 16.04 on both servers. Although the restoration itself went well, when I changed the SSH configuration and rebooted the development server, I discovered an important new problem that needs to be fixed: the disk partition UUIDs are wrong, since the filesystem was restored from a different physical device.

I was able to boot into GRUB (I don't know which version; I couldn't find out how to check that), but my searches found no way to fix things from there. Instead, I came to believe that I need to reinstall GRUB. I successfully booted the development server into the rescue kernel and have been carefully trying to learn how to do this GRUB reinstall properly.

The best resource I found was this StackExchange post, but when my attempts didn't match my expectations, I realised that the post is not about RAID1, which is what I have. Furthermore, I believe I have a dedicated GRUB partition.

I am unable to mount the /dev/sda# and /dev/sdb# partitions; I get a message like mount: unknown filesystem type 'linux_raid_member'.
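That error is expected: partitions of type linux_raid_member are not meant to be mounted directly; the md array they belong to has to be assembled first. A minimal sketch from the rescue environment, assuming the rescue image ships mdadm:

```shell
# Assemble any arrays the rescue kernel did not auto-assemble
mdadm --assemble --scan

# Confirm which md devices exist and which member partitions back them
cat /proc/mdstat

# Mount the assembled array, not its raid-member partitions
mkdir -p /mnt/root
mount /dev/md127 /mnt/root
```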

So what I have (I think) understood from the fdisk -l output below is that:

  • /dev/sda1-3 are the main partitions for the OS
  • /dev/sdb1-3 are the mirror RAID partitions for sda
  • /dev/md127 is the multi-device node for the RAID1 array backed by /dev/sda3 and /dev/sdb3 (the sizes match)
  • /dev/md126 I am guessing is the dedicated GRUB partition as I have noticed people make 1 MB partitions for that
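These guesses can be checked directly rather than inferred from the sizes; a sketch, assuming mdadm is available in the rescue shell:

```shell
# Show the member devices, RAID level, and state of each array
mdadm --detail /dev/md126
mdadm --detail /dev/md127

# Or the one-line summary for all assembled arrays
cat /proc/mdstat
```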

Output of fdisk -l in Rescue 64bit 4.x-Kernel:

Disk /dev/ram0: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram1: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram2: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram3: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram4: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram5: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram6: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram7: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram8: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram9: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram10: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram11: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram12: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram13: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram14: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram15: 80 MiB, 83886080 bytes, 163840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/loop0: 77 MiB, 80732160 bytes, 157680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcdb133c6

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sda1           2048   2050047   2048000  1000M fd Linux raid autodetect
/dev/sda2        2050048  10049535   7999488   3.8G 82 Linux swap / Solaris
/dev/sda3       10049536 468860927 458811392 218.8G fd Linux raid autodetect

Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x41d48153

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sdb1           2048   2050047   2048000  1000M fd Linux raid autodetect
/dev/sdb2        2050048  10049535   7999488   3.8G 82 Linux swap / Solaris
/dev/sdb3       10049536 468860927 458811392 218.8G fd Linux raid autodetect

Disk /dev/md127: 218.8 GiB, 234911236096 bytes, 458811008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md126: 1000 MiB, 1048510464 bytes, 2047872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

I am reluctant to simply mount /dev/md126 and install GRUB from that, because I see other guides doing additional work with the mdadm utility and handling all these various partitions; it is not nearly as simple and short as it would be without RAID.
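For reference, the sequence those RAID-aware guides seem to follow is roughly the one below. This is a sketch only, assuming /dev/md127 holds the root filesystem and /dev/md126 holds /boot (both assumptions based on the partition sizes above):

```shell
# From the rescue environment: mount the restored system
mount /dev/md127 /mnt
mount /dev/md126 /mnt/boot
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done

# Reinstall GRUB to the MBR of BOTH raid members, so the machine
# can still boot if either disk fails, then rebuild grub.cfg
chroot /mnt grub-install /dev/sda
chroot /mnt grub-install /dev/sdb
chroot /mnt update-grub

# Unwind the mounts
for fs in dev proc sys; do umount /mnt/$fs; done
umount /mnt/boot /mnt
```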

This guide for RAID1 does not mention a dedicated GRUB partition and does not even run the GRUB install, so I doubt it suits my situation.

So how should I properly handle this situation and set GRUB and RAID1 back in place on this development server (cloned from a separate production server)? Thank you for your answers, much appreciated!

  • I have not worked on a RAID system, so unsure if this applies, but on the systems I am familiar with, you would have to fix /etc/fstab to have the correct UUIDs. – Organic Marble Aug 25 '17 at 11:01
  • You may be right. I have the UUIDs from blkid on both the production server and the development server, but I didn't find the /boot/grub/ directory on the development server, so I copied it over from production since I'm thinking I just didn't include it in the backup. Now the question is what do I do with these UUIDs to fix them for the boot? The fstab file does not actually have any UUIDs, just device names as usual... Also, production has md0-3, sdc1 and sdd1 (maybe something to do with the fact that production has an additional hard drive), while development has md127, md126 and loop0. – a_rts Aug 25 '17 at 13:10
  • From what I've found so far I think I'm supposed to change the /dev/sd# in fstab to UUID=... and run update-grub, but how can I actually run this command for the mounted Linux drive if I am currently in the Recovery kernel and cannot boot past GRUB on the main kernel? – a_rts Aug 25 '17 at 13:16
  • Not an expert on servers, but I would boot into a live environment using USB and make the changes. Also, my fstab already had UUIDs, so I cannot say that the devices listed in yours are incorrect. – Organic Marble Aug 25 '17 at 13:30
  • See my answer to this question: https://askubuntu.com/questions/915179/fstab-edit-crashed-system/915190#915190 – Organic Marble Aug 25 '17 at 13:32
  • These servers are hosted in another country, so plugging in a USB or inserting a disk of any sort are not available options, unfortunately. I can do effectively the same as with a USB live system, though, through the Rescue Kernel, so I can update fstab for the main kernel. Is just editing the fstab going to be enough? I'm about to try that, but I'm thinking that GRUB needs to be updated with these UUIDs as well, and I don't know how. – a_rts Aug 25 '17 at 13:43
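The fstab rewrite discussed in the comments above can be sketched as follows. The UUID value here is a hypothetical placeholder (on the real system it would come from blkid), and the command is demonstrated on a sample file rather than the real /etc/fstab:

```shell
# On the real system the UUID would come from:
#   blkid -s UUID -o value /dev/md127
# The value below is a hypothetical placeholder.
uuid="0a1b2c3d-1111-2222-3333-444455556666"

# Demonstrate the device-name -> UUID= rewrite on a sample fstab line
printf '/dev/md127 / ext4 errors=remount-ro 0 1\n' > /tmp/fstab.sample
sed -i "s|^/dev/md127|UUID=$uuid|" /tmp/fstab.sample
cat /tmp/fstab.sample
# → UUID=0a1b2c3d-1111-2222-3333-444455556666 / ext4 errors=remount-ro 0 1
```

After editing the real fstab the same way, update-grub would be run from inside the chrooted system so the regenerated configuration picks up the new UUIDs.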
