
I have two problems that I believe are related. One has been happening for years across multiple versions of Ubuntu (at least from 11.10 to 12.04); the other is new since I updated the kernel.

First problem first; this is the one that's been happening for years. During a normal boot the system appears to hang on the "purple" screen, but really it's just sitting at an initramfs prompt behind the purple screen. I usually type "exit" and the boot continues as planned, loading into Ubuntu within a second or two. I've always suspected that this hang was due to my novice setup of a RAID array a couple of years ago. Something went wrong, so I deleted the array and started over, and the second time everything was fine. But that's about the time this problem started happening. I've lived with it because I only reboot my computer a couple of times a year, and I can live with typing "exit" a couple of times a year...

Second problem: I updated to the latest kernel (something I hate doing because it ALWAYS gives me trouble, but the tinkerer in me insists on getting the latest updates). Now, after the first error, I type exit and the system hangs, showing numerous warnings, none of which I think have anything to do with the actual problem, because none of them are about mounting /dev/md/1 (even though it started as /dev/md1 when I created it, the array is now known as /dev/md/1). If I comment out the line in /etc/fstab that mounts /dev/md/1, everything starts up OK.

Once Ubuntu loads, I have to stop /dev/md1 (that is not a typo), and then mdadm --assemble --scan will properly start /dev/md/1. I then edit /etc/fstab so that /dev/md/1 is a device to mount and run mount -a.
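
The manual recovery looks roughly like the sketch below; this is just the sequence I described, nothing beyond the array names is specific to my setup:

    # Stop the wrongly-assembled array, reassemble from a scan,
    # then mount everything listed in /etc/fstab.
    sudo mdadm --stop /dev/md1
    sudo mdadm --assemble --scan
    sudo mount -a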

Nowhere can I find why /dev/md1 is starting but /dev/md/1 is not.
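
For reference, this is a generic way to see which array names actually got assembled (standard mdadm/proc checks, not my exact output):

    # What the kernel currently has running, and under which md names.
    cat /proc/mdstat
    # How mdadm itself identifies the arrays it can see.
    sudo mdadm --detail --scan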

I don't know how to find the logs from previous boots, or else I would include the warnings that I'm seeing during boot...

In order to boot into Ubuntu OK, I go to a recovery shell, remount the filesystem read/write, edit /etc/fstab to remove the /dev/md/1 mount, and then continue with the boot.
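
That workaround looks roughly like this; the fstab entry shown is only a placeholder, since my real line has its own mount point and options:

    # From the recovery shell: make the root filesystem writable again,
    # then comment out the /dev/md/1 entry in /etc/fstab.
    mount -o remount,rw /
    nano /etc/fstab
    # The commented-out line might look something like (placeholder values):
    # #/dev/md/1   /mnt/raid   ext4   defaults   0   2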

  • I've done more research on this and made some changes, but the problem still exists. Three of my 7 drives in the RAID config had partitions that ran all the way to the end of the drive. Apparently this can cause issues with the superblock: /dev/sdb could appear to have the same superblock as /dev/sdb1, which can cause problems (a way to check for this is sketched below). I've fixed this; no partitions now go to the end of the physical drive, and there is 2.xx MB of unpartitioned space at the end of each. The same problem still exists as initially described. – jerussell Jan 26 '13 at 19:35
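
One way to check for that whole-disk vs. partition superblock ambiguity (the device names here are just examples):

    # With end-of-device metadata (0.90/1.0), a partition that runs to the
    # very end of the disk can make the same superblock visible on both
    # the whole disk and the partition. Compare what each reports:
    sudo mdadm --examine /dev/sdb
    sudo mdadm --examine /dev/sdb1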

1 Answer


This link gave me the answer I needed: https://superuser.com/questions/454864/mdadm-ubuntu-12-04-fails-to-assemble-raid6-during-boot. Since three of the drives are on a peripheral SAS card, those drives are not ready by the time mdadm does its search for the array.

I added sleep 15 like so:

degraded_arrays()
{
    # Give the drives on the SAS card time to appear before mdadm
    # checks whether the array is degraded.
    sleep 15
    mdadm --misc --scan --detail --test >/dev/null 2>&1
    return $((! $?))
}

in /usr/share/initramfs-tools/scripts/mdadm-functions, as the link suggests, and then updated the initramfs. Now it boots successfully with the array started and mounted properly.
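
For completeness, rebuilding the initramfs after editing that script is the usual step (a standard command, nothing specific to my setup):

    # Rebuild the initramfs for the running kernel so the edited
    # mdadm-functions script takes effect on the next boot.
    sudo update-initramfs -u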