I have a raidz volume that I have noticed is running in degraded mode. It looks to me as if one of the drives has changed from /dev/sdf1 to /dev/sde1, because the machine only has the 3 WD Red 3 TB drives the RAID was built on, and the disk manager shows them as sda, sdb, and sde, as shown in the picture below.
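In case it is useful, the device node the pool expects can be confirmed with zpool status, and /dev/disk/by-id shows which stable identifiers currently map to the sdX letters (the pool name is a placeholder, and the WDC grep just assumes the WD Red drives report themselves under that vendor string):

sudo zpool status -v [pool name]
ls -l /dev/disk/by-id/ | grep -i WDC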
Question
Is there a way I can fix the RAID array without having to wipe the /dev/sde1 drive and rebuild the array, which would take quite some time? To avoid this happening in the future, do I need to avoid creating pools like so:
sudo zpool create -f [pool name] raidz /dev/sdb /dev/sdc /dev/sdd
and use UUIDs instead like this:
sudo zpool create -f [pool name] raidz \
"92e3fea4-66c7-4f59-9929-3a620f2bb24a" \
"92e3fea4-66c7-4f59-9929-3a620f2bb24b" \
"92e3fea4-66c7-4f59-9929-3a620f2bb24c"
Context
- Ubuntu 16.04 running native ZFS.
Comment:
sudo zpool export [my pool name]
and then
sudo zpool import -d [my pool name]
as covered here: https://askubuntu.com/questions/967091/zpool-degrades-when-plugging-in-a-drive – Programster Jan 01 '18 at 11:45
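For completeness: the -d flag of zpool import takes a directory to scan rather than a pool name, so the full sequence presumably looks like this, with /dev/disk/by-id as the usual choice of directory and the pool name still a placeholder:

sudo zpool export [my pool name]
sudo zpool import -d /dev/disk/by-id [my pool name]

After an import done this way, zpool status should list the vdevs under their by-id names instead of the shifting sdX letters.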