
I have been using a RAID 5 array as a NAS for the past two years; I created it with the help of this guide. Recently I had to swap this system's motherboard because the previous one stopped working. The system has 1 NVMe drive and 3 HDDs: the NVMe is the system drive, and the 3 HDDs are the RAID storage. After the motherboard replacement the system boots fine into the existing OS (no fresh install of Ubuntu), but the RAID array no longer works.
I don't have the Linux knowledge to recover this array on my own. Things I have already tried:


sudo mdadm --assemble /dev/md/NAS:0 /dev/sda /dev/sdb /dev/sdc

Output:

mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: /dev/sda has no superblock - assembly aborted*

*The output is the same for sda, sdb and sdc.
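
(For reference: since the disks are partitioned, any md superblock would normally sit on the partitions rather than on the whole disks. A read-only check, and an assembly attempt against the partitions, would look roughly like this; sda2/sdb2/sdc2 are taken from the lsblk output further down, and whether they actually hold md metadata is exactly the assumption being tested:)

sudo mdadm --examine /dev/sda2 /dev/sdb2 /dev/sdc2     # read-only: prints any md superblock found
sudo mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2   # only worth running if --examine reports md metadata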

cat /proc/mdstat

Output:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
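
(For reference, a scan-based assembly attempt is also non-destructive and, with --verbose, explains why each candidate device is skipped; this is a generic sketch, not a step from the guide:)

sudo mdadm --assemble --scan --verbose   # tries every ARRAY in mdadm.conf / every partition
cat /proc/mdstat                         # re-check whether anything got assembled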

lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

Output:

NAME SIZE FSTYPE TYPE MOUNTPOINT
loop0 4K squashfs loop /snap/bare/5
loop1 9.6M squashfs loop /snap/canonical-livepatch/235
loop2 9.6M squashfs loop /snap/canonical-livepatch/246
loop3 105.8M squashfs loop /snap/core/16202
loop4 105.8M squashfs loop /snap/core/16091
loop5 63.5M squashfs loop /snap/core20/2015
loop6 63.9M squashfs loop /snap/core20/2105
loop7 74.1M squashfs loop /snap/core22/1033
loop8 245.9M squashfs loop /snap/firefox/3600
loop9 73.9M squashfs loop /snap/core22/864
loop10 246M squashfs loop /snap/firefox/3626
loop11 349.7M squashfs loop /snap/gnome-3-38-2004/143
loop12 349.7M squashfs loop /snap/gnome-3-38-2004/140
loop13 496.9M squashfs loop /snap/gnome-42-2204/132
loop14 497M squashfs loop /snap/gnome-42-2204/141
loop15 81.3M squashfs loop /snap/gtk-common-themes/1534
loop16 45.9M squashfs loop /snap/snap-store/638
loop17 91.7M squashfs loop /snap/gtk-common-themes/1535
loop18 12.3M squashfs loop /snap/snap-store/959
loop19 40.4M squashfs loop /snap/snapd/20671
loop20 40.9M squashfs loop /snap/snapd/20290
loop21 452K squashfs loop /snap/snapd-desktop-integration/83
sda 3.6T zfs_member disk
├─sda1 128M part
└─sda2 3.6T ext4 part
sdb 3.6T zfs_member disk
├─sdb1 128M part
└─sdb2 3.6T part
sdc 3.6T zfs_member disk
├─sdc1 128M part
└─sdc2 3.6T part
nvme0n1 119.2G disk
├─nvme0n1p1 512M vfat part /boot/efi
└─nvme0n1p2 118.7G ext4 part /
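
(For reference, the zfs_member signature that lsblk reports can be cross-checked without writing anything to the disks; both commands below are read-only:)

sudo blkid /dev/sda /dev/sda1 /dev/sda2   # prints every filesystem/RAID signature blkid can see
sudo wipefs /dev/sda                      # with no wipe options, wipefs only lists the signatures it finds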

cat /etc/mdadm/mdadm.conf

Output:

# mdadm.conf
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.

# Please refer to mdadm.conf(5) for information about this file.

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Thu, 17 Mar 2022 16:19:20 +0530 by mkconf
ARRAY /dev/md/NAS:0 metadata=1.2 name=NAS:0 UUID=e1965c11:b3f7c3db:68417477:2663bfbf
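
(For reference, the ARRAY line above is normally regenerated rather than edited by hand once an array is running again; a typical sequence, sketched here and not taken from the thread, is:)

sudo mdadm --detail --scan                                      # prints an ARRAY line for every running array
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf  # append it to the config
sudo update-initramfs -u                                        # refresh the initramfs copy, as the file header says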

sudo mount /dev/md/NAS:0 /mnt/md0

Output:

mount: /mnt/md0: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
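
(For reference, the colon in /dev/md/NAS:0 is probably what makes mount treat the source as a network host:path, hence the nfs/cifs helper message. If the array were assembled, mounting the plain /dev/mdX node avoids that; a sketch assuming the array comes up as md0:)

cat /proc/mdstat                # confirm the array is assembled and note its mdX name
sudo blkid /dev/md0             # read-only: report the filesystem type on the array
sudo mount /dev/md0 /mnt/md0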

sudo fsck.ext4 -v /dev/sda2

Output:

e2fsck 1.46.5 (30-Dec-2021)
ext2fs_check_desc: Corrupt group descriptor: bad block for inode bitmap
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Bad magic number in super-block while using the backup blocks
fsck.ext4: going back to original superblock
Superblock has an invalid journal (inode 8).
Clear? yes
*** journal has been deleted ***
The filesystem size (according to the superblock) is 1953443072 blocks
The physical size of the device is 976721408 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? yes
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****

I aborted the above process. Should I continue?
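
(For reference, fsck can first be run in a purely read-only mode, which reports problems without modifying the partition and is a safer way to decide whether to continue. A generic sketch:)

sudo fsck.ext4 -n /dev/sda2      # -n: open read-only and answer "no" to every question
sudo mdadm --examine /dev/sda2   # read-only: shows whether the partition carries an md superblock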

  • Please edit your question and add the output of sudo mdadm --detail --scan – Terrance Jan 23 '24 at 18:52
  • @Terrance No output displayed for sudo mdadm --detail --scan – Extreme_N Jan 24 '24 at 01:39
  • Actually, looking at your lsblk output it looks like your array was / is ZFS and not mdadm. I should have caught that earlier. Maybe see https://askubuntu.com/a/123130/231142 – Terrance Jan 24 '24 at 03:07
  • @Terrance I saw that too. But I don't know how it changed to ZFS, and I'm sure I used the guide linked above to create the RAID 5 array. ZFS is not even installed on the system: running 'sudo zfs get all' gives the error 'zfs: command not found', and as I mentioned above, the RAID was working fine until the motherboard replacement. – Extreme_N Jan 24 '24 at 13:03
  • You might have to clear out what you have tried. I'm not 100% sure, but the name /dev/md/NAS:0 in the command you used may be a bad name. The only time I have seen a : used is in something like a CIFS mount pointed at another system; I have never seen a local device name with a : in it. Also, if the partitions in the lsblk listing are correct, you would be assembling the RAID from the partitions, i.e. using sda2, sdb2 and sdc2. In the how-to guide you used, they started with blank drives and partitioned after the RAID was created. – Terrance Jan 24 '24 at 14:22
  • I would also look into recovering the superblocks on those drives. A decent guide for superblock recovery: https://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/ Once a good superblock is recovered on the drive(s), the command I had you type in first should be able to see the existing RAID array that was on them. – Terrance Jan 24 '24 at 14:28
  • @Terrance Should I continue the process as per the last output I have added? – Extreme_N Jan 24 '24 at 15:04
  • You can try rebooting the system, then run sudo mdadm --detail --scan and see if it shows any output. The command is safe, as it only scans for what already exists; if it produces output, that output is what needs to be added to the /etc/mdadm/mdadm.conf file to load the array. – Terrance Jan 24 '24 at 15:06
  • @Terrance Nope, still no output. Should I continue with the command sudo fsck.ext4 -v /dev/sda2? – Extreme_N Jan 24 '24 at 15:14
  • Sorry, yes, rerun the checks on the drive(s). – Terrance Jan 24 '24 at 15:18
  • @Terrance It just goes on:
    Group descriptor 32895 checksum is 0x0000, should be 0x102d. FIXED. Block bitmap for group 32896 is not in group. (block 11404697244042371) Relocate? cancelled! Inode bitmap for group 32896 is not in group. (block 17597146731742339) Relocate? cancelled! Inode table for group 32896 is not in group. (block 34204170357736579) WARNING: SEVERE DATA LOSS POSSIBLE. Relocate? cancelled! Group descriptor 32896 checksum is 0x0018, should be 0xac0c. FIXED.

    /dev/sda2: ********** WARNING: Filesystem still has errors **********

    – Extreme_N Jan 24 '24 at 15:29

1 Answer


I managed to recover by just rebuilding the array from scratch. I did a fresh install of OMV, wiped the 3 HDDs using the GUI, and created a new RAID 5 array, which started resyncing. Once the resync finished, the array mounts and the files are accessible again.
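
(For anyone doing the same from the command line rather than the OMV GUI, the rebuild boils down to roughly the following; the device names and mount point are assumptions, and --create plus mkfs.ext4 destroy whatever is on the member disks:)

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
cat /proc/mdstat                 # watch the initial resync progress
sudo mkfs.ext4 -F /dev/md0       # new filesystem on the array (erases it)
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0

Once the resync has finished, the ARRAY line can be persisted in /etc/mdadm/mdadm.conf as sketched under the mdadm.conf output above.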