
I've been struggling for hours to successfully mount my RAID partition. I'm using Ubuntu 18.04 and have created a 2.0 TB RAID-10 array (using 4 x 1.0 TB SSDs), as shown here: [screenshot of the RAID-10 array]

I don't have deep technical knowledge, but why can't I access the remaining volume?

If I run sudo lvscan I get:

  ACTIVE            '/dev/vg/swap' [<14.90 GiB] inherit
  ACTIVE            '/dev/vg/root' [32.59 GiB] inherit
  ACTIVE            '/dev/vg/temp' [9.31 GiB] inherit
  ACTIVE            '/dev/vg/var' [<4.66 GiB] inherit
  ACTIVE            '/dev/vg/home' [186.26 GiB] inherit

Further, df -h shows:

Filesystem           Size  Used Avail Use% Mounted on
udev                  16G     0   16G   0% /dev
tmpfs                3.2G  2.3M  3.2G   1% /run
/dev/mapper/vg-root   32G  5.1G   26G  17% /
tmpfs                 16G  252M   16G   2% /dev/shm
tmpfs                5.0M  4.0K  5.0M   1% /run/lock
tmpfs                 16G     0   16G   0% /sys/fs/cgroup
/dev/nvme2n1p1       487M  6.1M  480M   2% /boot/efi
/dev/mapper/vg-home  183G  1.6G  172G   1% /home
/dev/mapper/vg-var   4.6G  1.9G  2.5G  43% /var
/dev/mapper/vg-temp  9.2G   96M  8.6G   2% /tmp
tmpfs                3.2G   88K  3.2G   1% /run/user/1000
/dev/loop0            54M   54M     0 100% /snap/core18/719
/dev/loop1            91M   91M     0 100% /snap/core/6405
/dev/loop2            35M   35M     0 100% /snap/gtk-common-themes/1122
/dev/loop3           170M  170M     0 100% /snap/gimp/113
/dev/loop4           147M  147M     0 100% /snap/chromium/595

And sudo vgscan shows

 Reading volume groups from cache.
  Found volume group "vg" using metadata type lvm2

Running sudo pvscan I see the output below, which is exactly the space I would like to access:

     PV /dev/md0   VG vg         lvm2 [<1.82 TiB / <1.58 TiB free]
  Total: 1 [<1.82 TiB] / in use: 1 [<1.82 TiB] / in no VG: 0 [0   ]

Any ideas what I have done wrong here? It appears I only have access to about 250 GB of storage. Here is how the individual drives have been partitioned: [screenshot of the drive partitioning]

I've just noticed that only one SSD has a partition mounted at /boot/efi, while the others look like the image directly above.

[screenshot]

My /etc/fstab file looks like this: [screenshot of /etc/fstab]

EDIT: the output of sudo vgdisplay is: [screenshot of vgdisplay output]

  • Since you put your RAID /dev/md0 under the control of LVM, you wouldn't mount the RAID directly but rather manage the available space through LVM and mount the resulting logical volumes, as is already done with /, /home, /tmp ... Using vgdisplay should reveal how much space is left in the volume group. Depending on what you actually want to achieve, you can create a new logical volume that can then be mounted, or grow one or more of the existing ones. – Thomas Feb 16 '19 at 13:24
  • Please show the output of sudo vgdisplay. – Michael Hampton Feb 16 '19 at 15:15
  • See above @Thomas – Darthtrader Feb 17 '19 at 02:01
  • @Thomas I just want to use the remaining file space to install applications separate to the OS as well as store large data files. Plain vanilla stuff. – Darthtrader Feb 17 '19 at 02:19
  • Easiest option would be to just grow the home LV using gparted Live boot environment. – Thomas Feb 17 '19 at 12:35
  • @Thomas do you have any reference material on how this can be implemented? – Darthtrader Feb 18 '19 at 06:11
  • maybe this helps? – Thomas Feb 18 '19 at 06:57
  • @Thomas helped me solve it. For future reference for others I found this helpful and easy - https://www.tecmint.com/extend-and-reduce-lvms-in-linux/ – Darthtrader Feb 21 '19 at 11:34
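
For anyone who finds this later, here is a minimal sketch of the two approaches suggested in the comments (grow an existing logical volume, or create a new one in the free space). The volume group name "vg" and the LV names come from the output above; the new LV name "data", its size, and the mount point are illustrative assumptions, and the resize/format commands assume ext4 filesystems.

  # Option 1: grow the existing /home LV into the free space reported by pvscan
  sudo vgdisplay vg                      # confirm "Free PE / Size" in the volume group
  sudo lvextend -l +100%FREE /dev/vg/home
  sudo resize2fs /dev/mapper/vg-home     # grow the ext4 filesystem online

  # Option 2: create a separate LV for applications/data instead
  # (name, size and mount point are assumptions -- adjust to taste)
  sudo lvcreate -L 1T -n data vg
  sudo mkfs.ext4 /dev/vg/data
  sudo mkdir -p /data
  sudo mount /dev/vg/data /data
  # add an /etc/fstab entry so it mounts at boot, e.g.:
  # /dev/mapper/vg-data  /data  ext4  defaults  0  2

Note that extending the LV alone is not enough; the filesystem must also be grown (resize2fs here, or lvextend -r to do both in one step) before df shows the extra space.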
