If anyone could offer any advice it would be much appreciated.
I am running Ubuntu Server 18.04.2 LTS with 9 x 3TB WD Red hard drives in a RAID 5 array managed by mdadm.
Is there a reason why the total capacity reported for the mounted filesystem is smaller than the size of the RAID array itself? I have pasted what I hope is the relevant output below; if I'm missing anything, please let me know.
Many thanks.
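For reference, the array was created in July 2018 with something along these lines (reconstructed from the mdadm output further down, so the exact command is an approximation; device names may have shifted since then):

sudo mdadm --create /dev/md/RAID --level=5 --raid-devices=9 --chunk=512K \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdg /dev/sdh /dev/sdi /dev/sdj
sudo mkfs.ext4 /dev/md/RAID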
df -T /srv/RAID
Filesystem     Type   1K-blocks        Used  Available Use% Mounted on
/dev/md127     ext4 20428278764 16279337724 4148924656  80% /srv/RAID
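Doing the maths on those numbers: df reports 20428278764 1K-blocks, i.e. roughly 20.4 TB for the filesystem, while mdadm below reports an Array Size of 23441080320 KiB, i.e. roughly 24.0 TB, which is what I would expect from (9 - 1) x 3 TB for a 9-disk RAID 5. So the filesystem seems to be about 3.6 TB smaller than the array it sits on.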
mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Mon Jul 23 20:38:24 2018
        Raid Level : raid5
        Array Size : 23441080320 (22355.16 GiB 24003.67 GB)
     Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
      Raid Devices : 9
     Total Devices : 9
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Mar 10 10:31:32 2019
             State : clean
    Active Devices : 9
   Working Devices : 9
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : bitmap
              Name : kenny:RAID  (local to host kenny)
              UUID : d293081d:91d806d2:d70f301d:a4023e4a
            Events : 45623

    Number   Major   Minor   RaidDevice State
       0       8      144        0      active sync   /dev/sdj
       1       8      128        1      active sync   /dev/sdi
       2       8      112        2      active sync   /dev/sdh
       3       8       96        3      active sync   /dev/sdg
       4       8       48        4      active sync   /dev/sdd
       5       8       32        5      active sync   /dev/sdc
       6       8       16        6      active sync   /dev/sdb
       7       8        0        7      active sync   /dev/sda
       8       8       64        8      active sync   /dev/sde
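If it helps, I can also post what the ext4 superblock itself thinks the filesystem size is, e.g. via:

sudo tune2fs -l /dev/md127 | grep -E 'Block count|Block size'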