
I have Ubuntu 16.04.4 LTS and a set of identical HDDs. I have created a RAID 1 array, but I cannot seem to mount it.

cat /proc/mdstat
md126 : active raid1 sde[1] sdd[0]
      24412992 blocks super 1.2 [2/2] [UU]

mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Mon Apr 16 12:20:39 2018
     Raid Level : raid1
     Array Size : 24412992 (23.28 GiB 25.00 GB)
  Used Dev Size : 24412992 (23.28 GiB 25.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed May  9 23:08:47 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : kat1:raidarray3  (local to host kat1)
           UUID : 5da485b7:9aed668a:053cec83:88179e15
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde

Upon initialization, I issued the command:

sudo mkfs.ext4 -F /dev/md126

then

sudo mount /dev/md126 /mnt/raid_23G

No error is displayed, but the RAID array does not show up in df -h:

root@kat1:/mnt# sudo mount /dev/md126 /mnt/raid_23G
root@kat1:/mnt#
root@kat1:/mnt# sudo umount /dev/md126
umount: /dev/md126: not mounted
root@kat1:/mnt# fsck /dev/md126
fsck from util-linux 2.27.1
e2fsck 1.42.13 (17-May-2015)
/dev/md126: clean, 11/1525920 files, 139793/6103248 blocks
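To rule out a stale /etc/mtab, the kernel's own view of the mounts can be checked directly (a quick diagnostic sketch; all three tools ship with Ubuntu 16.04):

# /proc/mounts is the kernel's authoritative mount list; df -h reads
# the same data, so this bypasses any /etc/mtab oddities.
grep md126 /proc/mounts

# findmnt (util-linux) prints the mount point, if any, for a device.
findmnt /dev/md126

# Check the kernel log for ext4/mount errors that mount(8) may not
# have printed to the terminal.
dmesg | tail -n 20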

# ls /dev/md126*
/dev/md126 

/mnt# ls -la 
total 28 
drwxr-xr-x 7 root root 4096 Apr 16 12:22 . 
drwxr-xr-x 23 root root 4096 Apr 15 17:52 .. 
drwxr-xr-x 2 root root 4096 Apr 16 12:15 backup_raid 
drwxr-xr-x 5 root root 4096 Apr 23 16:20 md0 
drwxr-xr-x 2 root root 4096 Apr 16 12:22 raid_23G 
drwxr-xr-x 2 root root 4096 Apr 16 12:22 raid_587G 
drwxr-xr-x 4 root root 4096 Apr 15 23:40 Store

So the mount point directory does exist, and there is no /dev/md126p1.
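For completeness, two ways to confirm that the array really has no partition table (a sketch; lsblk ships with util-linux, and parted is available via apt if not already installed):

# lsblk shows the device tree; a partition on the array would appear
# as a child node such as md126p1.
lsblk /dev/md126

# parted prints the partition table, or reports that no recognised
# disk label exists.
sudo parted /dev/md126 print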

Thank you very much for your assistance. The output of the requested command is the following:

root@kat1-kvm:/# sudo blkid
/dev/sda1: UUID="0f62bf49-d7b4-444b-9229-093b902c4f35" TYPE="ext2" PARTUUID="7dcd7eef-01"
/dev/sda5: UUID="kwwylr-xhGI-lGu0-vEDf-n1RV-BBep-MAdn10" TYPE="LVM2_member" PARTUUID="7dcd7eef-05"
/dev/md127: UUID="ea8d8ae6-2dea-494a-9283-926f29209b77" TYPE="ext4"
/dev/sdb: UUID="d07f60aa-3e50-937d-2cb6-0265baf86362" UUID_SUB="9812be8a-845b-01b6-ac13-93d983f6ce60" LABEL="kat1-kvm:raidarray2" TYPE="linux_raid_member"
/dev/sdc: UUID="d07f60aa-3e50-937d-2cb6-0265baf86362" UUID_SUB="16b50f6a-fed4-1985-c66a-fa487e42a968" LABEL="kat1-kvm:raidarray2" TYPE="linux_raid_member"
/dev/sdd: UUID="5da485b7-9aed-668a-053c-ec8388179e15" UUID_SUB="ab6f9b08-5ee6-b974-fbf1-f0401f4d0ab6" LABEL="kat1-kvm:raidarray3" TYPE="linux_raid_member"
/dev/sde: UUID="5da485b7-9aed-668a-053c-ec8388179e15" UUID_SUB="56e87833-2c7b-dbf8-5b53-582fc6e6bde6" LABEL="kat1-kvm:raidarray3" TYPE="linux_raid_member"
/dev/sdf: UUID="c28036d1-57b2-62d9-6188-5187f0b3a099" UUID_SUB="8168786f-9528-3ae8-dd3c-3e24df3b275c" LABEL="kat1-kvm:raidarray" TYPE="linux_raid_member"
/dev/sdg: UUID="c28036d1-57b2-62d9-6188-5187f0b3a099" UUID_SUB="02a8529c-e62c-25e1-8654-358371cf5ede" LABEL="kat1-kvm:raidarray" TYPE="linux_raid_member"
/dev/sdh1: LABEL="Store" UUID="d3521cce-65f7-4914-8476-15a3058368da" TYPE="ext4" PARTLABEL="Store" PARTUUID="833db700-d7ea-4307-b57b-7c61c9772840"
/dev/md0: UUID="40110e88-2ed2-49f7-b5d0-8353a1feacd3" TYPE="ext4"
/dev/md126: UUID="904d39f9-6c1b-462d-a841-614c1ba8c9d8" TYPE="ext4"
/dev/mapper/kat1--kvm--vg-root: UUID="e5d8c7d5-04bd-4596-9e4f-73bc010151b9" TYPE="ext4"
/dev/mapper/kat1--kvm--vg-swap_1: UUID="b6069bbf-9f87-4c5c-9034-838c37af0290" TYPE="swap"

To be clear, this server has multiple disks: three sets of identical disks (for three RAID 1 groups) and two single ones. The first RAID group works fine and is accessible, as can be seen from the info above (md0); the other two refuse to mount, although mdstat says the RAID is fine.
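One thing worth comparing at this point (a diagnostic sketch) is whether the arrays mdadm has assembled match what /etc/mdadm/mdadm.conf records; arrays missing from the config are typically the ones that come up with auto-assigned names such as md126 and md127:

# Print one ARRAY line per assembled array, in mdadm.conf syntax.
sudo mdadm --detail --scan

# Compare against what the config file actually records.
grep '^ARRAY' /etc/mdadm/mdadm.conf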

I am lost.

I did follow your link, and it provided a command

sudo update-initramfs -u

which actually solved the problem; everything is now mounted properly. I have no idea what it just did, but I am so grateful to you for taking the time. I am in your debt.

Lazaros
  • The folder /mnt/raid_23G does not exist, so md126 could not be mounted there. Why it didn't give you an error, I am not sure. Can you run ls /dev/md126* and give the output? There is a chance that a partition is set up and shows as /dev/md126p1, which is what should be mounted. – Terrance May 09 '18 at 22:15
  • Can you also add the output of sudo blkid so we can see if there is a partition with a UUID that can be mounted? – Terrance May 10 '18 at 05:11

1 Answer


I will try my best to help with this answer. I have a somewhat similar setup: a 2x500GB RAID 1 for boot and OS, plus a 5x4TB RAID 5 configuration.

After my arrays were created, I found that I didn't like the /dev/md* names they were showing up under, so I changed them. I don't know if this will really make much difference, but it might.

In the /etc/mdadm/mdadm.conf file I found the line for my RAID5 configuration. It looked like this:

ARRAY /dev/md/1  metadata=1.2 UUID=3bb988cb:d5270497:36e75f46:67a9bc65 name=Intrepid:1

Yours will have the UUID=5da485b7:9aed668a:053cec83:88179e15 in the line.

My naming convention was different, so I changed the line to this instead:

ARRAY /dev/md1  metadata=1.2 UUID=3bb988cb:d5270497:36e75f46:67a9bc65
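If you would rather not type the UUID by hand, mdadm can generate these ARRAY lines for you (a sketch; review and de-duplicate the appended lines in an editor before rebooting):

# Keep a backup of the config, then append freshly generated ARRAY
# lines; rename or trim them to taste afterwards.
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf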

After the change I ran the following, so that the copy of mdadm.conf embedded in the initramfs matches the one on disk (the initramfs is what assembles the arrays at early boot):

sudo update-initramfs -u

You might want to see this answer as well because you also have a /dev/md127 showing up: Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?

Then I rebooted the host. After the reboot, cat /proc/mdstat looks like this:

md1 : active raid5 sdi1[5] sdh1[3] sdg1[2] sdf1[1] sde1[0]
      15627542528 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

Then I ran sudo blkid to get the UUID for that /dev/md1. Mine actually had a partition created on it, so it showed up like this:

/dev/md1p1: UUID="a50fd553-b143-4ad8-bdb1-2247d9349e86" TYPE="ext4" PARTUUID="19a0b3cb-03f0-4a7c-b562-3537b3046365"

I wanted that one mounted at /media/storage, so I put this line in my /etc/fstab file:

UUID=a50fd553-b143-4ad8-bdb1-2247d9349e86 /media/storage   ext4    defaults 0 0

As you can see, the UUID in the fstab matches the one from blkid. After the line was added and saved, all I had to run to have it mount without a reboot was:

sudo mount -a
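Then, to confirm it took (using my mount point as the example):

# The array should now appear with its size and usage.
df -h /media/storage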

Hope this helps!

Terrance