I have been using a RAID 5 array as a NAS for the past two years, which I created with the help of this guide. Recently I had to swap the motherboard of this system because the previous one stopped working. The system has one NVMe drive and three HDDs: the NVMe is the system drive, while the three HDDs are used for the RAID storage. The system boots fine into the OS (without a fresh install of Ubuntu) after the motherboard replacement, but the RAID array is not working anymore.
I don't have the Linux knowledge to recover this array on my own. Things I have already tried:
Update: I managed to recover the data by simply rebuilding the array from scratch. I did a fresh install of OMV, wiped the three HDDs using the GUI, then created a new RAID 5 array and let it resync. After the resync finished, the files were accessible again.
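For reference, a rough command-line sketch of what the OMV GUI does in those steps, under the assumption that the new array used the same disk order and default parameters as the old one (which is presumably why the existing filesystem reappeared after the resync). The /dev/md0 name and the mount point are placeholders of mine, and getting the parameters wrong here destroys data, so treat it as a sketch rather than a recipe:

sudo wipefs --all /dev/sda /dev/sdb /dev/sdc   # the "wipe" step: clears partition tables and leftover signatures
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
cat /proc/mdstat                               # watch the resync progress
sudo mount /dev/md0 /mnt/md0                   # mount once the array device exists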
sudo mdadm --assemble /dev/md/NAS:0 /dev/sda /dev/sdb /dev/sdc
Output:
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: /dev/sda has no superblock - assembly aborted*
*The output is the same for sda, sdb and sdc
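Before assuming the superblocks are gone for good, it may be worth examining the whole disks and the large partitions separately; these commands only read metadata and do not modify anything (partition names taken from the lsblk output below):

sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc      # any md superblock on the whole disks?
sudo mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2   # ...or on the big partitions?
sudo wipefs --no-act /dev/sda                        # list every signature wipefs can see, without writing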
cat /proc/mdstat
Output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
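Since /proc/mdstat shows the RAID personalities loaded but no arrays at all, the logs from the current boot can show whether auto-assembly was even attempted; both commands are read-only:

sudo dmesg | grep -i raid       # kernel messages about md/raid during this boot
journalctl -b | grep -i mdadm   # userspace mdadm messages, if any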
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output:
NAME SIZE FSTYPE TYPE MOUNTPOINT
loop0 4K squashfs loop /snap/bare/5
loop1 9.6M squashfs loop /snap/canonical-livepatch/235
loop2 9.6M squashfs loop /snap/canonical-livepatch/246
loop3 105.8M squashfs loop /snap/core/16202
loop4 105.8M squashfs loop /snap/core/16091
loop5 63.5M squashfs loop /snap/core20/2015
loop6 63.9M squashfs loop /snap/core20/2105
loop7 74.1M squashfs loop /snap/core22/1033
loop8 245.9M squashfs loop /snap/firefox/3600
loop9 73.9M squashfs loop /snap/core22/864
loop10 246M squashfs loop /snap/firefox/3626
loop11 349.7M squashfs loop /snap/gnome-3-38-2004/143
loop12 349.7M squashfs loop /snap/gnome-3-38-2004/140
loop13 496.9M squashfs loop /snap/gnome-42-2204/132
loop14 497M squashfs loop /snap/gnome-42-2204/141
loop15 81.3M squashfs loop /snap/gtk-common-themes/1534
loop16 45.9M squashfs loop /snap/snap-store/638
loop17 91.7M squashfs loop /snap/gtk-common-themes/1535
loop18 12.3M squashfs loop /snap/snap-store/959
loop19 40.4M squashfs loop /snap/snapd/20671
loop20 40.9M squashfs loop /snap/snapd/20290
loop21 452K squashfs loop /snap/snapd-desktop-integration/83
sda 3.6T zfs_member disk
├─sda1 128M part
└─sda2 3.6T ext4 part
sdb 3.6T zfs_member disk
├─sdb1 128M part
└─sdb2 3.6T part
sdc 3.6T zfs_member disk
├─sdc1 128M part
└─sdc2 3.6T part
nvme0n1 119.2G disk
├─nvme0n1p1 512M vfat part /boot/efi
└─nvme0n1p2 118.7G ext4 part /
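The zfs_member FSTYPE shown for all three disks is worth chasing (one of the comments below points at it too): if it is a real ZFS label rather than a stale signature, the pool side can be inspected read-only. A sketch, assuming the ZFS tools are not installed yet:

sudo apt install zfsutils-linux   # ZFS userspace tools on Ubuntu
sudo zpool import                 # with no pool name this only lists importable pools, it imports nothing
sudo zdb -l /dev/sda              # dump any ZFS label found on the disk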
cat /etc/mdadm/mdadm.conf
Output:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
# This configuration was auto-generated on Thu, 17 Mar 2022 16:19:20 +0530 by mkconf
ARRAY /dev/md/NAS:0 metadata=1.2 name=NAS:0 UUID=e1965c11:b3f7c3db:68417477:2663bfbf
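The ARRAY line above still describes the old array. If assembly ever succeeds again, the usual follow-up (also suggested in the comments below) is to regenerate that line from the running array and refresh the initramfs; a sketch:

sudo mdadm --detail --scan                                       # prints an ARRAY line for every assembled array
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # append it (and remove the stale line by hand)
sudo update-initramfs -u                                         # as the !NB! note in the file says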
sudo mount /dev/md/NAS:0 /mnt/md0
Output:
mount: /mnt/md0: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
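That particular mount error usually appears when the source is not an existing block device, so mount falls back to treating NAS:0 like a network share. A harmless check for whether any md device node exists at all:

ls -l /dev/md* 2>/dev/null          # no output here means no md device was created
sudo mdadm --detail /dev/md/NAS:0   # will fail if the device node does not exist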
sudo fsck.ext4 -v /dev/sda2
Output:
e2fsck 1.46.5 (30-Dec-2021)
ext2fs_check_desc: Corrupt group descriptor: bad block for inode bitmap
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Bad magic number in super-block while using the backup blocks
fsck.ext4: going back to original superblock
Superblock has an invalid journal (inode 8).
Clear? yes
*** journal has been deleted ***
The filesystem size (according to the superblock) is 1953443072 blocks
The physical size of the device is 976721408 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
I aborted the above process at that point. Should I continue?
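Before deciding whether to continue, there are read-only ways to look at the same superblock without letting fsck write anything further (the size mismatch above, roughly twice the partition size, hints that the ext4 superblock belongs to a filesystem that spanned the whole array rather than this single partition):

sudo fsck.ext4 -n /dev/sda2   # -n answers "no" to every prompt, so nothing is modified
sudo dumpe2fs -h /dev/sda2    # print only the superblock summary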
Comments:

sudo mdadm --detail --scan – Terrance Jan 23 '24 at 18:52

From the lsblk output it looks like your array was / is ZFS and not mdadm. I should have caught that earlier. Maybe see https://askubuntu.com/a/123130/231142 – Terrance Jan 24 '24 at 03:07

/dev/md/NAS:0 in the command that you used may be a bad name. The only time I have seen a : used is in a CIFS mount that points to another system; I have never seen a local device name with a : in it. Also, if the lsblk listing of partitions is correct, you would be assembling the RAID from partitions, which would mean using the sda2, sdb2 and sdc2 partitions. In the how-to guide you have there, they started with blank drives, then partitioned after the RAID was created. – Terrance Jan 24 '24 at 14:22

Run sudo mdadm --detail --scan and see if it shows any output. The command is safe as it only scans for what already exists, and if it succeeds, its output is what needs to be added to the /etc/mdadm/mdadm.conf file to load the array. – Terrance Jan 24 '24 at 15:06

… Group descriptor 32895 checksum is 0x0000, should be 0x102d. FIXED. Block bitmap for group 32896 is not in group. (block 11404697244042371) Relocate
/dev/sda2: ********** WARNING: Filesystem still has errors **********
– Extreme_N Jan 24 '24 at 15:29
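Following the partition suggestion in the Jan 24 '24 at 14:22 comment, an assembly attempt against the partitions instead of the whole disks would look roughly like this (the /dev/md0 name is a placeholder; mdadm refuses to assemble anything without a valid superblock, so the attempt itself does not write to the data disks):

sudo mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2
sudo mdadm --assemble --scan --verbose   # or let mdadm search everything allowed by the DEVICE line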