Contrary to popular belief, you don’t need shared storage for libvirt live migrations. They work just fine with both LVM- and qcow2-backed VMs.
Let’s bootstrap libvirt on a fresh Debian 10 (buster) machine, so we are on the same page:
```shell
# First set up ssh key based authentication for your user.
# This is left as an exercise to the reader.

# Install virsh, libvirtd and qemu
sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm

# Allow your user to interact with the libvirt 'system' URI
# without further polkit authentication.
sudo usermod -aG libvirt "$USER"
newgrp libvirt

# Default to using the 'system' URI for your user.
mkdir -p ~/.config/libvirt/
echo 'uri_default = "qemu:///system"' > ~/.config/libvirt/libvirt.conf
# Check out the libvirt FAQ question on "What is the difference between
# qemu:///system and qemu:///session?" for more information.

# (Auto)start the default libvirt network; this already comes with libvirt on buster.
virsh net-autostart default
virsh net-start default

# Create and (auto)start a default libvirt storage pool.
virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
virsh pool-autostart default
virsh pool-start default
```
Do the same on a second machine. For the sake of this demo, both machines need to be connected to the same Ethernet segment.
Now create a virtual machine on the first hypervisor. One simple way is virt-manager, but the tool doesn’t really matter. If you use virt-manager you can keep pretty much all the default settings, except for networking: change the network selection from NAT to macvtap + Bridge. Note that neither NAT nor macvtap + Bridge is very suitable for production systems though. 💁♀️
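If you prefer the command line over virt-manager, a minimal sketch using virt-install (from the `virtinst` package) could look like this. The NIC name `eth0` and the ISO path are assumptions — adjust them to your hypervisor:

```shell
# Create a VM with a 15 GiB qcow2 disk in the 'default' pool and a
# macvtap (type=direct) NIC bridged onto the host interface.
# NIC and the installer ISO path are assumptions -- adjust to your setup.
NIC=eth0
virt-install \
  --name vm1 \
  --memory 2048 \
  --vcpus 2 \
  --disk pool=default,size=15,format=qcow2 \
  --cdrom /var/lib/libvirt/images/debian-10-netinst.iso \
  --network "type=direct,source=${NIC},source_mode=bridge"
```

`type=direct` with `source_mode=bridge` is virt-install’s spelling of the macvtap + Bridge option in virt-manager.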
Now we should be able to migrate the running virtual machine from the first hypervisor to the second one! Run this from your local machine:
```shell
LIBVIRT_DEFAULT_URI="qemu+ssh://hv1/system?socket=/var/run/libvirt/libvirt-sock" \
  virsh migrate --live --persistent --undefinesource --copy-storage-all \
  --verbose vm1 'qemu+ssh://hv2/system?socket=/var/run/libvirt/libvirt-sock'
```
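While the migration runs, you can watch its progress from a second terminal; a sketch, using the same host names as above:

```shell
# Query the running migration job on the source hypervisor.
# domjobinfo reports data processed/remaining for memory and disk.
LIBVIRT_DEFAULT_URI='qemu+ssh://hv1/system?socket=/var/run/libvirt/libvirt-sock' \
  virsh domjobinfo vm1
```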
Adjust hv1, hv2 and vm1 accordingly. I’m running this from my Mac, which is why I need to define the socket locations via `?socket=/var/run/libvirt/libvirt-sock`: my local virsh expects the libvirt-sock to be in a different location.
In the past you _had_ to create the VM’s disks on the destination hypervisor before starting the migration. This was fixed quite some time ago. Unfortunately I still sometimes run into permission issues like

```
error: Cannot access storage file '/var/lib/libvirt/images/vm1.qcow2' (as uid:64055, gid:64055): No such file or directory
```

when starting the migration, and I’m not sure why. You can pre-create the disks on the destination hypervisor via something like

```shell
virsh vol-create-as default vm1.qcow2 16108814336 --format qcow2
```

Note that this also means libvirt will just overwrite any disk image on the target hypervisor that has the same name as a disk of the VM you are migrating.
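The capacity you pass to `vol-create-as` has to match the source disk exactly. One way to look it up on the source hypervisor (the disk target `vda` is an assumption — verify it with `domblklist` first):

```shell
# List the VM's disk targets, then read the exact capacity in bytes.
# 'Capacity' from domblkinfo is the value to pass to vol-create-as.
virsh domblklist vm1
virsh domblkinfo vm1 vda
```

For reference, the 16108814336 bytes in the example above is just over 15 GiB.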
Once the migration is done, remember to manually remove the VM’s disks from the source hypervisor via

```shell
virsh vol-delete vm1.qcow2 --pool default
```
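Before and after that cleanup it’s worth double-checking that the VM really landed on the destination and that nothing stale remains in the source pool; a quick verification sketch:

```shell
# Confirm the VM is now running on the destination hypervisor...
LIBVIRT_DEFAULT_URI='qemu+ssh://hv2/system?socket=/var/run/libvirt/libvirt-sock' \
  virsh list
# ...and that the source 'default' pool no longer contains its disk.
LIBVIRT_DEFAULT_URI='qemu+ssh://hv1/system?socket=/var/run/libvirt/libvirt-sock' \
  virsh vol-list default
```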