Extending an existing LVM on a virtual machine

Quick and dirty steps; these work on any hypervisor.

1a. Add more space to the existing disk. On VMware, if the drive is thin provisioned, grow it to the new size.
1b. If the drive is thick provisioned, or you don’t know how to grow the disk, add an additional disk drive instead.
2a. I like to move to runlevel one, where nothing should be running. Use lsblk to find where the disk is mounted, unmount the directory(ies), and run fdisk against the physical disk drive (fdisk /dev/sdb, for instance). There’s a worked example of this path right after the list.
2b. If you had to add a drive, we’ll need to scan for it first (worked example after the list). Use:
/usr/bin/rescan-scsi-bus.sh or echo "- - -" > /sys/class/scsi_host/hostX/scan
Once you have that, you’ll be able to view the disk with fdisk. Use lsblk to ID the new device and use fdisk /dev/sdX to create a new partition on it.
3a. I know this sounds crazy, but delete the partition on the disk containing the mount point you want to extend. Now recreate the partition using the default values to use up all the new space. Make sure the new partition starts at the same sector as the old one, or the data will be lost. Set the partition type to 8e for LVM. Write the changes.
3b. With our new drive, we need to give it a partition; select creating a new partition in fdisk. Set the partition type to LVM with 8e. Write the changes.
4a. Use pvresize to grow the physical volume. Run partprobe /dev/sdX so the kernel re-reads the new partition table, then pvresize /dev/sdX# to grow the PV into the new space.
4b. Initialize the new partition as a physical volume with pvcreate /dev/sdX#, then add it to the volume group with vgextend vgname /dev/sdX#
5. Finally, grow the logical volume with lvextend -r -l +100%FREE /dev/mapper/vgname-lvname
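
Here’s the grow-in-place path (2a through 4a, plus step 5) end to end. The /dev/sdb, /data, vg_data, and lv_data names below are made up for illustration; pull the real ones from lsblk and vgs.

init 1                     # drop to runlevel one
lsblk                      # find the disk and its mount point
umount /data               # unmount the filesystem living on the LV
fdisk /dev/sdb             # d (delete), n (recreate with the same start sector), t then 8e, w
partprobe /dev/sdb         # make the kernel re-read the partition table
pvresize /dev/sdb1         # grow the physical volume into the new space
lvextend -r -l +100%FREE /dev/mapper/vg_data-lv_data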

In the final command, -r resizes the filesystem for you automatically (it runs resize2fs under the hood for ext filesystems). Don’t forget the plus sign in front of the 100%; without it the logical volume will not grow even if the command appears to succeed.
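
And the new-disk path (2b through 4b, plus step 5), again with made-up names; here the fresh disk shows up as /dev/sdc and joins the existing vg_data group:

rescan-scsi-bus.sh         # or: echo "- - -" > /sys/class/scsi_host/host0/scan
lsblk                      # spot the new disk, here /dev/sdc
fdisk /dev/sdc             # n (new partition, accept the defaults), t then 8e, w
pvcreate /dev/sdc1         # initialize the partition as a physical volume
vgextend vg_data /dev/sdc1 # add it to the volume group
lvextend -r -l +100%FREE /dev/mapper/vg_data-lv_data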

Scanning for SCSI devices in CentOS 6

Recently I ran into a situation where echoing "- - -" into /sys/class/scsi_host/hostX/scan didn’t work for me. I was sure I had done this before on CentOS 6, but I could not find a host entry to write to.

So, this is what I discovered! The friendly folks at Red Hat have created a script for dealing with this.

Do this:

1. yum install sg3_utils
2. run rescan-scsi-bus.sh (they added it to the path 😉 booyah!)

Now you should have your drive all discovered and ready for fdisk!
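
For what it’s worth, if the manual echo failed only because you couldn’t tell which hostX to poke, you can just loop over all of them:

for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done
lsblk                      # the new disk should show up now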

Here’s the original link for this info: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/rescan-scsi-bus.html

Accessing LVMs from CentOS Rescue

Recently, I had the fun (pain) of recovering a CentOS box with a corrupted logical volume. Below are the steps I took.

1. Boot to a CD/DVD and enter rescue mode from the menu
2. When the menu option for automatically mounting is offered, select no
3. Create a space to mount the disk. If you’re a purist, /mnt/sysimage is fine
4. Next, key in “lvm vgscan -v”
5. Now that we have the disks, we can tell the kernel about the VG; we do this with “lvm vgchange -a y”
6. You should see a message about the VG being active now. Great, now we can mount the logical volumes like you would with a physical drive. Use lvm lvs --all to list all the available logical volumes (see the sketch after this list).
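
Put together, the discovery steps look like this from the rescue shell; vg_sys is a stand-in for whatever vgscan actually reports on your box:

lvm vgscan -v              # scan the attached disks for volume groups
lvm vgchange -a y          # activate what was found
lvm lvs --all              # list the logical volumes, e.g. lv_root, lv_home, lv_swap
mount /dev/mapper/vg_sys-lv_root /mnt/sysimage   # a healthy LV mounts like any block device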

In my case, my lv_home became corrupted. I’m not sure what happened, but we’ll take a look at the logs later.

To remove the lv, we’ll do:

lvremove /dev/mapper/vg_name-lv_home

CentOS will prompt us if we’re sure we want to do this. You can use -f to skip the prompt if you find yourself scripting this.

Now, we’ll recreate the logical volume.

lvcreate -L <size> -n lv_home vg_name

Don’t forget to format the new lv!

mkfs.ext4 /dev/mapper/vg_name-lv_home

Now you can mount the drive to /mnt/sysimage/home, or just reboot.
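
Putting the whole rebuild together (the lvm prefix is how these commands are invoked in the rescue shell). The vg_sys name and the 20G size are examples only; get the real VG name from lvm lvs and size the new LV to fit your free extents:

lvm lvremove /dev/mapper/vg_sys-lv_home    # destroys the corrupted LV and its data
lvm lvcreate -L 20G -n lv_home vg_sys      # recreate it; 20G is a placeholder size
mkfs.ext4 /dev/mapper/vg_sys-lv_home       # put a fresh filesystem on it
mount /dev/mapper/vg_sys-lv_home /mnt/sysimage/home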