
I have a RAIDZ volume that I have noticed is running in degraded mode. It looks as if one of the drives has changed from /dev/sdf1 to /dev/sde1, because the machine only has the 3 WD RED 3TB drives the RAID was built upon, and the disk manager shows them as sda, sdb, and sde, as in the screenshot below:

[Screenshot of the disk manager showing the three WD RED drives as sda, sdb and sde]

Question

Is there a way I can fix the RAID array without having to wipe the /dev/sde1 drive and rebuild the array, which would take quite some time? To avoid this happening in the future, do I need to avoid creating pools like so:

sudo zpool create -f [pool name] raidz /dev/sdb /dev/sdc /dev/sdd 

and use the UUIDs (via their /dev/disk/by-uuid paths) instead, like this:

sudo zpool create -f [pool name] raidz \
/dev/disk/by-uuid/92e3fea4-66c7-4f59-9929-3a620f2bb24a \
/dev/disk/by-uuid/92e3fea4-66c7-4f59-9929-3a620f2bb24b \
/dev/disk/by-uuid/92e3fea4-66c7-4f59-9929-3a620f2bb24c

Context

  • Ubuntu 16.04 running native ZFS.

1 Answer


You should only ever create pools by using

/dev/disk/by-uuid/92e3fea4-66c7-4f59-9929-3a620f2bb24c

or something similar, such as

/dev/disk/by-id

Do the following to get the present mappings:

ls -l /dev/disk/by-uuid

or

ls -l /dev/disk/by-id

I prefer by-id, but always make 100% sure you are using the correct disk. Don't just blindly look at where the disk is mapped to. Using by-id I've had stale entries that map to the same device. Double, triple check and confirm.
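For instance, a raidz create against by-id paths would look roughly like the following. The ata-WDC_... names are placeholders; substitute whatever ls -l /dev/disk/by-id actually lists for your three drives:

# placeholder serial numbers below -- use the entries from ls -l /dev/disk/by-id
sudo zpool create -f [pool name] raidz \
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-AAAAAAAA \
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-BBBBBBBB \
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-CCCCCCCC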

  • Just some additional info: one can fix their pool by having it switch from sda, sdb, etc. to persistent device names by running sudo zpool export [my pool name] and then sudo zpool import -d /dev/disk/by-id [my pool name], as covered here: https://askubuntu.com/questions/967091/zpool-degrades-when-plugging-in-a-drive – Programster Jan 01 '18 at 11:45
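Spelled out, that export/re-import sequence is roughly the following, assuming the pool can briefly be taken offline (-d tells zpool import which directory of device names to scan):

sudo zpool export [my pool name]
# re-import using the persistent names under /dev/disk/by-id
sudo zpool import -d /dev/disk/by-id [my pool name]
# confirm the vdevs now show by-id names instead of sda/sdb/sde
sudo zpool status [my pool name]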