
I've Googled until my fingers ache. For some reason, I can't seem to grow my LV after adding an additional (identical) disk to my VG.

I think I've successfully added my fourth disk to the array. Every disk is the same WD 10 TB drive (which formats down to 9.1 TiB). My original array is 3 disks, so I currently have 18.2 TiB of usable space. The existing disks are /dev/sdc, /dev/sdd, and /dev/sde, and I am adding /dev/sdf.
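For completeness, here's roughly how I added the disk (a sketch of the usual steps rather than my exact session; it assumes /dev/sdf is handed to LVM whole and unpartitioned):

```shell
# Initialize the new disk as an LVM physical volume
pvcreate /dev/sdf

# Add the new PV to the existing volume group "data"
vgextend data /dev/sdf

# Confirm the VG now shows 4 PVs and ~9.1 TiB free
pvs
vgdisplay data
```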

When I do a vgdisplay, the result is as follows:

--- Volume group ---
  VG Name               data
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  30
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               36.38 TiB
  PE Size               4.00 MiB
  Total PE              9537532
  Alloc PE / Size       7153149 / <27.29 TiB
  Free  PE / Size       2384383 / <9.10 TiB
  VG UUID               4eex8c-TZo3-P7N0-pp8m-jQ9c-uTf2-lZe1XN

The way I read this is that the number of LVs is 1 (which is correct), and the number of PVs is 4 (also correct). The total VG size is 36.38 TiB, or 4 x 9.1 TiB (also correct), the allocated space is ~27.29 TiB, and the free space is ~9.1 TiB (also correct).
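The extent arithmetic seems to check out too. Here is the quick sanity check I did (sketched in Python, using the vgdisplay numbers above; the "3 fully-used PVs" reading is my interpretation):

```python
# Sanity-check the vgdisplay numbers above (PE size = 4 MiB).
PE_MIB = 4
total_pe = 9537532
alloc_pe = 7153149
free_pe = 2384383

# Four identical PVs: each contributes a quarter of the extents.
pe_per_pv = total_pe // 4
assert pe_per_pv == 2384383

# VG size: 9537532 extents * 4 MiB each
vg_tib = total_pe * PE_MIB / 1024**2
print(f"VG size: {vg_tib:.2f} TiB")  # ~36.38 TiB, matching vgdisplay

# Allocated space = exactly 3 fully-used PVs (the original array);
# all of the new PV's extents are still free.
assert alloc_pe == 3 * pe_per_pv
assert free_pe == pe_per_pv
```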

But this is where everything falls down ...

I did a

lvextend -l +100%FREE /dev/data/data

and

lvextend --extents +100%FREE /dev/data/data

Both commands return the following:

  Using stripesize of last segment 64.00 KiB
  Size of logical volume data/data unchanged from 18.19 TiB (4768764 extents).
  Logical volume data/data successfully resized.
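If it helps, here is my rough understanding of why nothing changes (an assumption on my part: data/data is a raid5 LV with 3 stripes, i.e. 2 data + 1 parity, which is how I read the rmeta/rimage entries in the lsblk output further down). The numbers also seem to line up with the "Insufficient free space" error I get later (see the comments under the answer below):

```python
# Hypothesis: data/data is raid5 with 3 stripes (2 data + 1 parity),
# so every 2 extents of usable LV space consume 3 physical extents.
free_pe = 2384383        # free physical extents, all on the new /dev/sdf
lv_extents = 4768764     # current LV size in usable extents

# Growing the LV by the full free space means adding ~free_pe usable
# extents, rounded down to a whole stripe (2 data extents per stripe);
# raid5 then needs 3/2 of that in physical extents:
usable_add = free_pe - (free_pe % 2)
physical_needed = usable_add * 3 // 2
print(physical_needed)   # 3576573 -- matches the lvextend error below
assert physical_needed > free_pe   # hence "Insufficient free space"

# On top of that, a 3-stripe raid5 LV needs its free extents spread
# across 3 different PVs, while here they all sit on one new PV.
```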

and when I do a

resize2fs /dev/data/data

it returns

resize2fs 1.45.5 (07-Jan-2020)
The filesystem is already 4883214336 (4k) blocks long.  Nothing to do!

When I do a

df -h

I get

/dev/mapper/data-data                 19T   15T  2.7T  85% /mnt/sdc

Finally, a

lsblk

gives:

sdc                             8:32   0   9.1T  0 disk
├─data-data_rmeta_2           253:6    0     4M  0 lvm
│ └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
└─data-data_rimage_2          253:7    0   9.1T  0 lvm
  └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
sdd                             8:48   0   9.1T  0 disk
├─data-data_rmeta_1           253:4    0     4M  0 lvm
│ └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
└─data-data_rimage_1          253:5    0   9.1T  0 lvm
  └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
sde                             8:64   0   9.1T  0 disk
├─data-data_rmeta_0           253:2    0     4M  0 lvm
│ └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
└─data-data_rimage_0          253:3    0   9.1T  0 lvm
  └─data-data                 253:8    0  18.2T  0 lvm   /mnt/sdc
sdf                             8:80   0   9.1T  0 disk

So I'm stumped. Can anyone help me please? I'm quickly running out of space.

Thanks a lot in advance!!

b

Ben Woo

1 Answer


I had a similar problem and managed to resolve it by changing the allocation policy (mine was set to contiguous instead of the default, normal):

lvextend /dev/UPLOADS/data /dev/sdd1 --alloc normal
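Before changing anything, it's worth confirming the LV's layout and current allocation policy with read-only commands along these lines:

```shell
# Show the LV's segment type, stripe count and allocation policy
lvs -o lv_name,segtype,stripes,lv_allocation_policy data

# Show per-PV free space -- a striped/raid LV can only grow if the
# free extents are spread across enough separate PVs
pvs -o pv_name,vg_name,pv_size,pv_free
```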

Tohmaxxx
  • I tried your solution (which makes a lot of sense), but got the following error: `Insufficient free space: 3576573 extents needed, but only 2384383 available` – Ben Woo Apr 08 '21 at 16:22
  • So I tried `lvextend -l 2384383 /dev/data/data`, and the result was: `Rounding size <9.10 TiB (2384383 extents) up to stripe boundary size <9.10 TiB (2384384 extents). New size given (2384384 extents) not larger than existing size (4768764 extents)`. And when I do a `resize2fs /dev/data/data`, I get: `resize2fs 1.45.5 (07-Jan-2020) The filesystem is already 4883214336 (4k) blocks long. Nothing to do!` :-( – Ben Woo Apr 08 '21 at 16:30