Adding disks under KVM: LVM based LV and partition resizing

With KVM gaining popularity, questions such as how to increase guest system disk space for partitions using LVM surface a lot more often than they used to. Indeed, the process is somewhat more complex than, for example, under OVZ or XEN PV.

We are assuming you have a guest Linux system that is using LVM in between your disk and partitions, i.e. you have a volume group, say “vg_vps”, and several logical volumes such as “lv_root”, etc. in this volume group. We further assume you would like to increase the size of the root partition.

As a prerequisite, you will have to add a second disk file or logical volume to the configuration of your KVM guest, and then reboot the guest in such a way that the config file will be re-read (most often it is simply good practice to shut down the guest from the inside, and then restart it using the new config file).

Adding disks in the first place can be achieved in several ways, depending on your setup. With Proxmox or Solus it is usually just a simple click and go – there are options in both control panels to add additional disks to a VM. Proxmox will create a new disk file, ideally qcow2, whereas Solus will normally create a new logical volume via LVM on the host node.
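
If you are not using a control panel at all, the rough manual equivalent is to create either a disk file or a logical volume on the host and attach it to the guest's config. A minimal sketch, with purely hypothetical names and paths (pick whichever storage variant matches your setup; attaching it works via a <disk> stanza like the one shown in the Windows section further down):

qemu-img create -f qcow2 /var/lib/libvirt/images/vps_extra.qcow2 20G   # file based variant
lvcreate -L 20G -n vps_extra_img VG_NAME                               # LVM based variant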

Once this has been done, you need to ssh or console into the guest and perform the following steps to enlarge a partition in your guest that is based on LVM:

First, check if your guest can see the new disk – if it is the second disk, it will often be /dev/sdb or /dev/vdb. You can do this using fdisk:

root@vps [~]# fdisk /dev/vdb

Using “p” you can display the current partition table, which should be empty. Create a new primary partition and use the default settings (first partition, spanning the entire disk, i.e. first to last block). Then change the partition type to LVM (“t”, then “8e”), and finally write the partition table with “w”, which also exits fdisk (“q” would quit without saving).
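
For quick reference, the key sequence inside fdisk boils down to roughly this (a sketch only, assuming the new disk really is /dev/vdb and all defaults are accepted):

# inside fdisk on /dev/vdb:
#   n    create a new partition
#   p    make it a primary partition (accept the default number and first/last block)
#   t    change the partition type
#   8e   set the type to "Linux LVM"
#   w    write the partition table and exit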

You will now have a new partition /dev/sdb1 or /dev/vdb1 in your guest system, which you can use to grow your root partition. Before that, however, we need to add the new partition to the volume group the root partition is on. Let’s have a look at the group first:

root@vps [~]# vgs

This will display the volume groups, whereas

root@vps [~]# lvs

will show the logical volumes. This will let you find out where your root partition is located. Next, we need to extend the volume group that the root logical volume is on. Assuming our root partition / lives on the logical volume “lv_root” in the volume group “vg_vps”, we run:

root@vps [~]# vgextend vg_vps /dev/vdb1

This will add the new disk/partition to our intended volume group “vg_vps”.
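
Depending on your LVM version, vgextend may initialise /dev/vdb1 as a physical volume on the fly; if it complains instead, you can do this explicitly beforehand (a standard LVM step that is not part of the original sequence above):

root@vps [~]# pvcreate /dev/vdb1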

root@vps [~]# vgdisplay

… will now display the new parameters, including the number of free PE (physical extents). Now we can increase the size of the logical volume our root partition is on (assuming it is called “lv_root”):

root@vps [~]# lvextend -l +INT_PE /dev/vg_vps/lv_root

… where INT_PE is the number of free physical extents we got from the vgdisplay command.
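
If you simply want to hand all remaining free space to the root logical volume, recent LVM versions also accept a percentage instead of an absolute extent count (shown here merely as an alternative sketch):

root@vps [~]# lvextend -l +100%FREE /dev/vg_vps/lv_root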

We are almost done now: we just need to grow the filesystem on the root partition so that it fills the enlarged logical volume, and this can be done live, i.e. without rebooting or unmounting, under many Linux distributions, including CentOS:

root@vps [~]# resize2fs /dev/vg_vps/lv_root
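
Note that resize2fs only handles ext2/3/4 filesystems. Should your root filesystem be XFS instead (the default on newer CentOS releases), the equivalent online grow command, run against the mount point, would be:

root@vps [~]# xfs_growfs /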

Voilà! When you now do

root@vps [~]# df

you will find your root partition has been increased!

 

Migrating Proxmox KVM to Solus / CentOS KVM

By default, Proxmox creates KVM based VMs with a single disk image file, typically in raw or qcow2 format. Solus, however, uses an LVM based storage system. So how do you move things over from Proxmox to Solus? Here goes:

  1. Shut down the respective Proxmox VM;
  2. As an additional precaution, make a copy of the Proxmox VM (cp will do);
  3. If the Proxmox VM is not in raw format, you need to convert it using qemu-img:
    qemu-img convert -O raw PROXMOX_VM_FILE OUTPUT_FILE
    Proxmox usually stores the image files under /var/lib/vz/images/ID
  4. Create an empty KVM VM on the Solus node with a disk size at least as large as the raw file of the Proxmox VM (the sketch after this list shows one way to check that size), possibly adjusting settings such as driver, PAE, etc., and keep it shut down;
  5. In the config file (usually under /home/kvm/kvmID) of the newly created Solus VM, check the following line:
    <source file='/dev/VG_NAME/kvmID_img'/>
    and make a note;
  6. dd the Proxmox raw image over to the Solus node:
    dd if=PROXMOX_VM.raw | ssh [options] user@solus_node 'dd of=/dev/VG_NAME/kvmID_img'
  7. Boot the new Solus KVM VM.
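
Putting steps 3, 4 and 6 together, a minimal end-to-end sketch could look like the following; all VM IDs, file names and host names are hypothetical placeholders, while VG_NAME and kvmID correspond to the values you noted from the Solus config above:

# on the Proxmox node: convert to raw and check the resulting image size
qemu-img convert -O raw /var/lib/vz/images/101/vm-101-disk-1.qcow2 /root/vm-101.raw
qemu-img info /root/vm-101.raw     # the reported virtual size is the minimum disk size for the Solus VM
# stream the raw image straight into the Solus logical volume
dd if=/root/vm-101.raw bs=1M | ssh root@solus_node 'dd of=/dev/VG_NAME/kvmID_img bs=1M'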

Current virtualisation statistics

Out of pure interest, we have just collected a snapshot of the current distribution of virtualisation technologies among our client base; below you will find the results (they do not take into account distortions caused by the virtualisation technologies available at each location, though):

Virtualisation Percentage
OpenVZ 25.21%
XEN PV 40.60%
XEN HVM 14.10%
KVM 20.09%

Adding disks to Windows VMs under KVM

Reading through various posts on forums and blogs all over the web, you will find many solutions offered for adding another disk to a Windows VM running under KVM. Below is one that worked smoothly on all our nodes running the Solus control panel, with KVM as the virtualisation technology:

  1. create a new volume with
    lvcreate -L [INTEGERSIZE]G -n [NEW_VOL_NAME] [VOLUMEGROUPNAME]
  2. edit the vm’s config file (under Solus, this is usually /home/kvm/kvmID/kvmID.xml), and add a section below the first disk (assuming hda has already been assigned, we use hdb here for the new disk; a filled-in sketch follows this list):
        <disk type='file' device='disk'>
         <source file='/dev/VOLUMEGROUPNAME/NEW_VOL_NAME'/>
         <target dev='hdb' bus='ide'/>
        </disk>
  3. shut down and then boot the vm
  4. log in, and in the storage section of your server administration tool, initialise and format the new disk
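
As a filled-in sketch (all names and sizes here are purely hypothetical), steps 1 and 2 for a 50 GB second disk on a volume group called vg_data and a VM with ID 105 might look like this:

lvcreate -L 50G -n kvm105_disk2 vg_data

        <disk type='file' device='disk'>
         <source file='/dev/vg_data/kvm105_disk2'/>
         <target dev='hdb' bus='ide'/>
        </disk>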

NB for Solus: you will have to create a hook and enable advanced config in the control panel; otherwise Solus will overwrite the edited config again. The most basic hook would just keep the production config in a separate file in the same directory and make sure that this file is the one actually used, e.g. from ./hooks/hook_config.sh (must be executable):

#!/bin/sh
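# keep the Solus-generated config as a backup, then switch to the edited config that contains the extra disk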
mv /home/kvm/kvmID/kvmID.xml /home/kvm/kvmID/kvmID.xml.dist
cp -f /home/kvm/kvmID/kvmID.xml.newdisk /home/kvm/kvmID/kvmID.xml
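
As noted above, the hook must be executable; assuming it lives under the VM's config directory as suggested, that is simply:

chmod +x /home/kvm/kvmID/hooks/hook_config.sh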