Extending (root) volume on LVM on Pure FlashArray

My earlier post Online resizing of Oracle volumes on Pure FlashArray focused on resizing Pure FlashArray volumes that were mounted directly or through Oracle ASM.  What if the volume is part of an LVM setup and you want to extend the size of a logical volume?

In this post I will go over how simple it is to extend a Pure volume that is part of an LVM setup.
There are plenty of sites on the web that cover LVM in Linux, so I won't get into the basics here; instead, I will focus on extending a logical volume that sits on a Pure volume.

The common use case is extending a logical volume in the root volume group to add space to /home or swap, since the default installation of Linux (Red Hat or Oracle Linux) creates a 50GB root volume and 4GB of swap, and allocates the rest to /home when the underlying volume is at least 50GB.

I recently installed Oracle Linux on a bare metal server with SAN boot, using a 200GB Pure FlashArray volume to host the boot LUN.  As part of the installation, Oracle Linux created a 50GB root volume, 4GB of swap space, and allocated the remaining 145GB to /home.

 

LVM Details

[root@oranode1 ~]# pvdisplay
 --- Physical volume ---
 PV Name /dev/mapper/boot_lun_node1p2
 VG Name ol
 PV Size 199.51 GiB / not usable 0
 Allocatable yes
 PE Size 4.00 MiB
 Total PE 51073
 Free PE 15
 Allocated PE 51058
 PV UUID GOlYxE-SBRV-fWKs-r7ri-0Y0t-0Zbk-CgDcAz

[root@oranode1 ~]# vgdisplay
 --- Volume group ---
 VG Name ol
 System ID
 Format lvm2
 Metadata Areas 1
 Metadata Sequence No 4
 VG Access read/write
 VG Status resizable
 MAX LV 0
 Cur LV 3
 Open LV 3
 Max PV 0
 Cur PV 1
 Act PV 1
 VG Size 199.50 GiB
 PE Size 4.00 MiB
 Total PE 51073
 Alloc PE / Size 51058 / 199.45 GiB
 Free PE / Size 15 / 60.00 MiB
 VG UUID OC99UO-1AJZ-kgLt-cEUL-Ssdo-BW9p-gYo3Au

[root@oranode1 ~]# lvdisplay
 --- Logical volume ---
 LV Path /dev/ol/swap
 LV Name swap
 VG Name ol
 LV UUID hRFcUs-EOlO-cLxC-CGiq-A0HZ-C4UJ-uVPW9O
 LV Write Access read/write
 LV Creation host, time oranode1, 2017-01-04 15:56:08 -0800
 LV Status available
 # open 2
 LV Size 4.00 GiB
 Current LE 1024
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 8192
 Block device 252:4

 --- Logical volume ---
 LV Path /dev/ol/home
 LV Name home
 VG Name ol
 LV UUID O6xNfE-miDJ-HB3T-G15u-sdco-YdYB-2JljMs
 LV Write Access read/write
 LV Creation host, time oranode1, 2017-01-04 15:56:08 -0800
 LV Status available
 # open 0
 LV Size 145.45 GiB
 Current LE 37234
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 8192
 Block device 252:5

 --- Logical volume ---
 LV Path /dev/ol/root
 LV Name root
 VG Name ol
 LV UUID eLihog-m02B-Jg74-pA6x-8Qlu-gqdY-jqCQlt
 LV Write Access read/write
 LV Creation host, time oranode1, 2017-01-04 15:56:09 -0800
 LV Status available
 # open 1
 LV Size 50.00 GiB
 Current LE 12800
 Segments 1
 Allocation inherit
 Read ahead sectors auto
 - currently set to 8192
 Block device 252:3

I realized I wanted the swap volume to be 20GB instead of 4GB.  There are various ways to increase swap space, such as allocating space from a different filesystem and marking it as swap, or adding a new volume altogether and setting it up as swap.  I chose to extend the logical volume (/dev/ol/swap) from 4GB to 20GB but, as you can see above, there is almost no free space left in the volume group to allocate (only 15 free extents, or 60MB).
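For reference, the swap-file route mentioned above is also straightforward.  Here is a minimal sketch, assuming a hypothetical /swapfile path and a 16G size; it is saved to a script and syntax-checked only, since the commands themselves need root on a real host.

```shell
# Hypothetical alternative: add a 16G swap file instead of extending the LV.
# The /swapfile path and size are illustrative.
cat > add_swapfile.sh <<'EOF'
#!/bin/bash
set -euo pipefail
fallocate -l 16G /swapfile           # preallocate the file
chmod 600 /swapfile                  # swap must not be world-readable
mkswap /swapfile                     # write the swap signature
swapon /swapfile                     # enable it immediately
echo '/swapfile none swap defaults 0 0' >> /etc/fstab  # persist across reboots
EOF
bash -n add_swapfile.sh && echo "syntax OK"
```

I still preferred extending the logical volume, since it keeps swap on the same LVM layout as the rest of the system.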

Here is the high-level sequence to add space and extend the logical volume:

  1. Extend the Pure volume (through the GUI or CLI)
  2. Rescan the device on the Linux host to reflect the new size
  3. Add a new partition to the boot LUN (in this instance, partition 3)
  4. Create a physical volume out of the new partition
  5. Extend the volume group with the new physical volume
  6. Extend the logical volume
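The sequence above can be sketched as a single script.  This is a minimal sketch using the names from this post (multipath alias boot_lun_node1, volume group ol, logical volume /dev/ol/swap); it is saved and syntax-checked rather than executed, since every step needs root and the fdisk step is interactive.

```shell
# Minimal sketch of steps 2-6; the names boot_lun_node1, ol, and /dev/ol/swap
# are from this post.  Syntax-checked only -- run each step deliberately,
# as root, on a real host.
cat > extend_lv.sh <<'EOF'
#!/bin/bash
set -euo pipefail
DEV=/dev/mapper/boot_lun_node1            # multipath alias of the boot LUN
VG=ol                                     # volume group to extend
LV=/dev/ol/swap                           # logical volume to grow

rescan-scsi-bus.sh -s                     # step 2: pick up the new LUN size
multipathd -k"resize map boot_lun_node1"  # step 2: resize the multipath map
# step 3: create partition 3 of type 8e interactively with: fdisk "$DEV"
kpartx -a "$DEV"                          # surface the new partition
pvcreate "${DEV}p3"                       # step 4: initialize it for LVM
vgextend "$VG" "${DEV}p3"                 # step 5: add it to the volume group
lvextend -L +16G "$LV"                    # step 6: grow the logical volume
EOF
bash -n extend_lv.sh && echo "syntax OK"
```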

Here is how the updated logical volume layout looks pictorially.

What is the benefit of this approach?

One could argue that you could simply create a new Pure volume, attach it to the host, create a physical volume out of it, and add it to the volume group.  That would work perfectly fine, but it introduces a new volume into the operating environment.  If you have a lot of scripts managing your operations (be it backup or cloning), or your volumes belong to a protection group, you would now have to modify multiple places to reflect the new volume that makes up the logical grouping.  Extending the existing volume instead accomplishes the change without impacting any of those scripts.

Detailed steps

  1. Edit the Pure volume in the Pure GUI and increase it from 200GB to 250GB.  The same can be done through the CLI:
    purevol setattr --size 250G fs_bootvol
  2. Rescan the SCSI devices with the -s option on the Linux host to reflect the new size.
    [root@oranode1 ~]# rescan-scsi-bus.sh -s
    Scanning SCSI subsystem for new devices
    Searching for resized LUNs
    RESIZED: Host: scsi0 Channel: 00 Id: 04 Lun: 01
     Vendor: PURE Model: FlashArray Rev: 483
     Type: Direct-Access ANSI SCSI revision: 06
    RESIZED: Host: scsi0 Channel: 00 Id: 05 Lun: 01
     Vendor: PURE Model: FlashArray Rev: 483
     Type: Direct-Access ANSI SCSI revision: 06
    RESIZED: Host: scsi1 Channel: 00 Id: 04 Lun: 01
     Vendor: PURE Model: FlashArray Rev: 483
     Type: Direct-Access ANSI SCSI revision: 06
    RESIZED: Host: scsi1 Channel: 00 Id: 05 Lun: 01
     Vendor: PURE Model: FlashArray Rev: 483
     Type: Direct-Access ANSI SCSI revision: 06
    0 new or changed device(s) found.
    4 remapped or resized device(s) found.
     [0:0:4:1]
     [0:0:5:1]
     [1:0:4:1]
     [1:0:5:1]
    0 device(s) removed.
    
    [root@oranode1 ~]# multipath -ll
    boot_lun_node1 (3624a93704b53f8cc7922442b000110a6) dm-0 PURE ,FlashArray
    size=200G features='0' hwhandler='0' wp=rw
    `-+- policy='queue-length 0' prio=1 status=active
     |- 0:0:4:1 sda 8:0 active ready running
     |- 0:0:5:1 sdb 8:16 active ready running
     |- 1:0:4:1 sdc 8:32 active ready running
     `- 1:0:5:1 sdd 8:48 active ready running
  3. The multipath command still shows 200G as the size.  To reflect the updated size, run the following command.  As you can see, the new size is then reflected at the OS level.  Note: boot_lun_node1 is the multipath alias for the boot LUN on the Pure array.
    [root@oranode1 ~]# multipathd -k'resize map boot_lun_node1'
    ok
    [root@oranode1 ~]# multipath -ll
    Jan 04 17:52:16 | zram0: No fc_remote_port device for 'rport--1:-1-0'
    boot_lun_node1 (3624a93704b53f8cc7922442b000110a6) dm-0 PURE ,FlashArray
    size=250G features='0' hwhandler='0' wp=rw
    `-+- policy='queue-length 0' prio=1 status=active
     |- 0:0:4:1 sda 8:0 active ready running
     |- 0:0:5:1 sdb 8:16 active ready running
     |- 1:0:4:1 sdc 8:32 active ready running
     `- 1:0:5:1 sdd 8:48 active ready running
  4. Using fdisk, create a new partition on the boot LUN (in this instance, the 3rd partition) with type 8e (Linux LVM).
    [root@oranode1 ~]# fdisk /dev/mapper/boot_lun_node1
    Welcome to fdisk (util-linux 2.23.2).
    
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Command (m for help): n
    Partition type:
     p primary (2 primary, 0 extended, 2 free)
     e extended
    Select (default p): p
    Partition number (3,4, default 3):
    First sector (419430400-524287999, default 419430400):
    Using default value 419430400
    Last sector, +sectors or +size{K,M,G} (419430400-524287999, default 524287999):
    Using default value 524287999
    Partition 3 of type Linux and of size 50 GiB is set
    
    Command (m for help): t
    Partition number (1-3, default 3): 3
    Hex code (type L to list all codes): L
    
     0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
     1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
     2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
     3 XENIX usr 3c PartitionMagic 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
     4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
     5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
     6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
     7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
     8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
     9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
     a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
     b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
     c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi eb BeOS fs
     e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD ee GPT
     f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ef EFI (FAT-12/16/
    10 OPUS 55 EZ-Drive a7 NeXTSTEP f0 Linux/PA-RISC b
    11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f1 SpeedStor
    12 Compaq diagnost 5c Priam Edisk a9 NetBSD f4 SpeedStor
    14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f2 DOS secondary
    16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ fb VMware VMFS
    17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fc VMware VMKCORE
    18 AST SmartSleep 65 Novell Netware b8 BSDI swap fd Linux raid auto
    1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fe LANstep
    1c Hidden W95 FAT3 75 PC/IX be Solaris boot ff BBT
    1e Hidden W95 FAT1 80 Old Minix
    Hex code (type L to list all codes): 8e
    Changed type of partition 'Linux' to 'Linux LVM'
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    
    WARNING: Re-reading the partition table failed with error 22: Invalid argument.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)
    Syncing disks.
    
    [root@oranode1 ~]# ls -ltr /dev/mapper/boot*
    lrwxrwxrwx 1 root root 7 Jan 4 17:52 /dev/mapper/boot_lun_node1p2 -> ../dm-2
    lrwxrwxrwx 1 root root 7 Jan 4 17:52 /dev/mapper/boot_lun_node1p1 -> ../dm-1
    lrwxrwxrwx 1 root root 7 Jan 4 17:54 /dev/mapper/boot_lun_node1 -> ../dm-0
  5. As the fdisk warning indicated, the kernel still uses the old partition table, so /dev/mapper doesn't yet show the 3rd partition.  Run kpartx to re-read the partition table and create the device map entry.
    [root@oranode1 ~]# kpartx -a /dev/mapper/boot_lun_node1
    [root@oranode1 ~]# ls -ltr /dev/mapper/boot*
    lrwxrwxrwx 1 root root 7 Jan 4 17:54 /dev/mapper/boot_lun_node1 -> ../dm-0
    lrwxrwxrwx 1 root root 7 Jan 4 17:56 /dev/mapper/boot_lun_node1p2 -> ../dm-2
    lrwxrwxrwx 1 root root 7 Jan 4 17:56 /dev/mapper/boot_lun_node1p1 -> ../dm-1
    lrwxrwxrwx 1 root root 7 Jan 4 17:56 /dev/mapper/boot_lun_node1p3 -> ../dm-6
  6. Now create a physical volume out of the 3rd partition we just created.
    [root@oranode1 ~]# pvcreate /dev/mapper/boot_lun_node1p3
      Physical volume "/dev/mapper/boot_lun_node1p3" successfully created
    
    [root@oranode1 ~]# pvdisplay
     --- Physical volume ---
     PV Name /dev/mapper/boot_lun_node1p2
     VG Name ol
     PV Size 199.51 GiB / not usable 0
     Allocatable yes
     PE Size 4.00 MiB
     Total PE 51073
     Free PE 15
     Allocated PE 51058
     PV UUID GOlYxE-SBRV-fWKs-r7ri-0Y0t-0Zbk-CgDcAz
    
     --- Physical volume ---
     PV Name /dev/mapper/boot_lun_node1p3
     VG Name ol
     PV Size 50.00 GiB / not usable 0
     Allocatable yes
     PE Size 4.00 MiB
     Total PE 12799
     Free PE 12799
     Allocated PE 0
     PV UUID 5u4WNT-bljS-cUq3-18xP-yMzT-fxVD-9kV58d
    
  7. Extend the volume group with the physical volume we just created.  As vgdisplay shows, roughly 50GB of free space is now available.
    [root@oranode1 ~]# vgextend ol /dev/mapper/boot_lun_node1p3
     Volume group "ol" successfully extended
    
    [root@oranode1 ~]# vgdisplay
     --- Volume group ---
     VG Name ol
     System ID
     Format lvm2
     Metadata Areas 2
     Metadata Sequence No 5
     VG Access read/write
     VG Status resizable
     MAX LV 0
     Cur LV 3
     Open LV 3
     Max PV 0
     Cur PV 2
     Act PV 2
     VG Size 249.50 GiB
     PE Size 4.00 MiB
     Total PE 63872
     Alloc PE / Size 51058 / 199.45 GiB
     Free PE / Size 12814 / 50.05 GiB
     VG UUID OC99UO-1AJZ-kgLt-cEUL-Ssdo-BW9p-gYo3Au
  8. Now extend the logical volume by 16GB (to reach a total of 20GB), since the underlying volume group has free space available.
    [root@oranode1 ~]# lvextend -L +16G /dev/ol/swap
     Size of logical volume ol/swap changed from 4.00 GiB (1024 extents) to 20.00 GiB (5120 extents).
     Logical volume swap successfully resized.
    
    [root@oranode1 ~]# lvdisplay
     --- Logical volume ---
     LV Path /dev/ol/swap
     LV Name swap
     VG Name ol
     LV UUID hRFcUs-EOlO-cLxC-CGiq-A0HZ-C4UJ-uVPW9O
     LV Write Access read/write
     LV Creation host, time oranode1, 2017-01-04 15:56:08 -0800
     LV Status available
     # open 2
     LV Size 20.00 GiB
     Current LE 5120
     Segments 3
     Allocation inherit
     Read ahead sectors auto
     - currently set to 8192
     Block device 252:4
    
     --- Logical volume ---
     LV Path /dev/ol/home
     LV Name home
     VG Name ol
     LV UUID O6xNfE-miDJ-HB3T-G15u-sdco-YdYB-2JljMs
     LV Write Access read/write
     LV Creation host, time oranode1, 2017-01-04 15:56:08 -0800
     LV Status available
     # open 1
     LV Size 145.45 GiB
     Current LE 37234
     Segments 1
     Allocation inherit
     Read ahead sectors auto
     - currently set to 8192
     Block device 252:5
    
     --- Logical volume ---
     LV Path /dev/ol/root
     LV Name root
     VG Name ol
     LV UUID eLihog-m02B-Jg74-pA6x-8Qlu-gqdY-jqCQlt
     LV Write Access read/write
     LV Creation host, time oranode1, 2017-01-04 15:56:09 -0800
     LV Status available
     # open 1
     LV Size 50.00 GiB
     Current LE 12800
     Segments 1
     Allocation inherit
     Read ahead sectors auto
     - currently set to 8192
     Block device 252:3
  9. In this example we extended swap space; to make the system use the larger swap device, perform the following.  If the extended logical volume holds a mounted filesystem instead, use the appropriate command (such as xfs_growfs or resize2fs) to grow the filesystem.
    [root@oranode1 ~]# swapoff -v /dev/ol/swap
    swapoff /dev/ol/swap
    
    [root@oranode1 ~]# mkswap /dev/ol/swap
    mkswap: /dev/ol/swap: warning: wiping old swap signature.
    Setting up swapspace version 1, size = 20971516 KiB
    no label, UUID=92439b82-9660-4d05-b7e7-77c74b53a1e8
    
    [root@oranode1 ~]# swapon -va
    swapon /dev/mapper/ol-swap
    swapon: /dev/mapper/ol-swap: found swap signature: version 1, page-size 4, same byte order
    swapon: /dev/mapper/ol-swap: pagesize=4096, swapsize=21474836480, devsize=21474836480
    [root@oranode1 ~]# cat /proc/swaps
    Filename Type Size Used Priority
    /dev/dm-4 partition 20971516 0 -1
    
    [root@oranode1 ~]# grep SwapTotal /proc/meminfo
    SwapTotal: 20971516 kB
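For the mounted-filesystem case mentioned in step 9, the online-grow command differs by filesystem type.  Here is a minimal sketch using /dev/ol/home and its /home mount point from this post as the example; it is saved and syntax-checked only, since the commands need root and a real LVM layout.

```shell
# Grow a mounted filesystem after lvextend; /dev/ol/home and /home are the
# example LV and mount point from this post.
cat > grow_fs.sh <<'EOF'
#!/bin/bash
set -euo pipefail
case "$(blkid -o value -s TYPE /dev/ol/home)" in
  xfs)  xfs_growfs /home ;;        # XFS grows online, addressed by mount point
  ext4) resize2fs /dev/ol/home ;;  # ext4 grows online, addressed by device
  *)    echo "unhandled filesystem type" >&2; exit 1 ;;
esac
EOF
bash -n grow_fs.sh && echo "syntax OK"
```

As a shortcut, lvextend's -r (--resizefs) option resizes the filesystem along with the logical volume in one step.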

As you can see, Pure Storage has significantly simplified step 1 (resizing the volume).  And it is not just resizing: Pure Storage has simplified many aspects of storage management, leaving storage administrators a lot more time for other activities, including keeping up with new trends.

