
How can I assemble a RAID 5 array using mdadm if my disks are actually partitions, and some of them are image files rather than actual disks? I'm using Ubuntu 18.04.

I have an old RAID 5 array that I want to recover. It once consisted of three 2 TB disks, each with a single 2 TB partition on it. I have two of them as image files (created with dd), and one as the actual HDD. I was hoping to access the RAID 5 array's contents, but I'm not able to even reassemble the array, let alone mount it. Here's what I tried:

Create loop devices for the image files

losetup -Pf image1.iso
losetup -Pf image2.iso
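
To confirm that the kernel actually created the partition devices for the loops (loop17 and loop33 are just the numbers I got; they may differ):

losetup -l    # list loop devices and the image files backing them
lsblk /dev/loop17 /dev/loop33    # should show the loop17p1 and loop33p1 partitions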

Create a custom ~/raid-mdadm.conf

DEVICE /dev/sdc1
DEVICE /dev/loop17p1
DEVICE /dev/loop33p1
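
Before assembling, the superblocks on those members can be inspected to confirm they belong to the same array (and mdadm can even print a matching ARRAY line for the config file):

mdadm --examine /dev/sdc1 /dev/loop17p1 /dev/loop33p1    # per-member superblock details
mdadm --examine --scan    # prints ARRAY lines for all detected arrays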

Try to run mdadm --assemble

mdadm --assemble --scan --verbose --config=~/raid-mdadm.conf

However, this fails with the following error:

mdadm: looking for devices for further assembly
mdadm: Merging with already-assembled /dev/md/0
mdadm: cannot re-read metadata from /dev/dm-8 - aborting
double free or corruption (!prev)
Aborted (core dumped)

If I don't specify my custom --config option, or if I use --config=partitions, the output shows that it doesn't actually consider /dev/sdc1, /dev/loop17p1, or /dev/loop33p1 in the --scan phase.

PS: If you're wondering why these partitions are involved, don't ask me; I don't remember why I decided that over 10 years ago. If you're wondering why I don't have either all the disks or all the images: my computer doesn't seem to want to recognize more than one disk at a time, and I don't have enough free storage for a third image plus the data I want to recover.

PS2: I'd also be happy to reassemble my RAID array using something other than mdadm.

derabbink

1 Answer


Use the man mdadm command to review the options and check the integrity of the array. Use mdadm --zero-superblock --force to clean up superblocks from failed assembly attempts that may already contain stale service information.
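
A minimal sketch of that check, using one of the loop partitions from the question. Note that --zero-superblock erases the md metadata on the given member, so only run it on a device you are sure holds leftover metadata from a failed attempt:

mdadm --examine /dev/loop17p1    # inspect the superblock before touching anything
mdadm --zero-superblock --force /dev/loop17p1    # destructive: wipes the md superblock on this member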

For a 6-disk RAID-5 array built from 1 TB disks, the probability of failure due to BER (bit error rate) is estimated at 4-5%; for a 4-disk array of 4 TB disks it already reaches 16-20%. A massive read of the entire disk volume, which is exactly what assembly and recovery involve, further increases the probability of failure. RAID-5 this was the first and final error. I can already see one discrepancy here: /dev/md/0 vs /dev/md0. Check your raid-mdadm.conf file.
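
Given the "Merging with already-assembled /dev/md/0" message, a stale array may already be holding some of the devices. A possible sequence, assuming the stale array really is /dev/md0 and nothing from it is mounted:

cat /proc/mdstat    # see which md arrays currently exist
mdadm --stop /dev/md0    # release the member devices from the stale array
mdadm --assemble --verbose /dev/md0 /dev/sdc1 /dev/loop17p1 /dev/loop33p1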

  • I'm afraid I don't know what this means: "RAID-5 this was the first and final error". If you're saying I shouldn't have picked RAID5 to begin with, then I'm afraid I can't do much about that past decision now. – derabbink Mar 12 '21 at 09:22
  • Okay, but given the possible RAID-5 failures, and looking at the /dev/md/0 in the output... it is not clear whether the array is really assembled that way or whether it is only an error in the configuration file. It is blocking /dev/md/0 now. –  Mar 12 '21 at 10:02
  • I have provided the full config file that I have. It contains only those 3 lines. The reason I created that ~/raid-mdadm.conf file was to instruct mdadm to use the correct partitions, because without it, it would not look at those. – derabbink Mar 12 '21 at 16:49
  • Check out this post, maybe it will help you: https://askubuntu.com/questions/1299978/install-ubuntu-20-04-desktop-with-raid-1-and-lvm-on-machine-with-uefi-bios The path to the /etc/_.conf file is specified there, the target arrays you are trying to create, and much more. –  Mar 12 '21 at 16:53