Virsh, Libvirt, etc.
======================================================================

(July 2017)

`virsh` is the primary linux CLI tool for dealing with virtual machines.

What is `virsh`? How does it fit into linux virtualization?

- KVM is a linux kernel module that provides access to low-level hardware features in the CPU that accelerate virtualization.
- QEMU is a type-2 virtualization hypervisor (and stand-alone emulator) that can use KVM to act as a type-1 hypervisor.
- While KVM in effect provides the CPU, QEMU brings disk, network, video, PCI, USB, serial, etc.
- Libvirt is a collection of software (including an API, the `libvirtd` daemon, and the `virsh` CLI utility) for managing virtual machines (and storage pools and some network stuff).
- Libvirt provides a consistent interface to multiple hypervisors, including KVM, Xen, and ESX.
- The `virsh` command-line utility is one way of interacting with libvirt. See VIRSH(1).

```
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     debian                         shut off
 -     freebsd-512ram-1cpu-20gb       shut off
 -     openbsd                        shut off
 -     openbsd59                      shut off
 -     rhel7                          shut off
```

Edit a guest/domain configuration:

```
# virsh edit my-vm
```

Starting, shutting down, rebooting, suspending/pausing, resuming/unpausing, and pulling the plug on a domain:

```
# virsh start my-vm
# virsh shutdown my-vm
# virsh reboot my-vm
# virsh suspend my-vm
# virsh resume my-vm
# virsh destroy my-vm
```

Set a guest to autostart or not autostart:

```
# virsh autostart my-vm
# virsh autostart --disable my-vm
# virsh list --autostart
```

Take a snapshot of a domain, list snapshots, revert to a snapshot, and delete a saved snapshot:

```
# virsh snapshot-create my-vm
# virsh snapshot-list my-vm
# virsh snapshot-revert my-vm snapshot-1
# virsh snapshot-delete my-vm snapshot-2
```

http://libvirt.org/formatstorage.html

> Libvirt provides storage management on the physical host through storage pools and volumes.
>
> A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to VMs as block devices.
>
> For example, the storage administrator responsible for an NFS server creates a share to store virtual machines' data. The system administrator defines a pool on the virtualization host with the details of the share (e.g. nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed 'mount nfs.example.com:/path/to/share /vm_data'. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started.
>
> Once the pool is started, the files in the NFS share are reported as volumes, and the storage volumes' paths may be queried using the libvirt APIs. The volumes' paths can then be copied into the section of a VM's XML definition describing the source storage for the VM's block devices.

https://libvirt.org/storage.html

> Although all storage pool backends share the same public APIs and XML format, they have varying levels of capabilities. Some may allow creation of volumes, others may only allow use of pre-existing volumes. Some may have constraints on volume size, or placement.
>
> The top level tag for a storage pool document is 'pool'. It has a single attribute type, which is one of dir, fs, netfs, disk, iscsi, logical, scsi (all since 0.4.1), mpath (since 0.7.1), rbd (since 0.9.13), sheepdog (since 0.10.0), gluster (since 1.2.0), zfs (since 1.2.8) or vstorage (since 3.1.0).
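As a concrete instance of that XML format, here is a minimal directory-type pool definition (an illustrative sketch — the pool name and target path are made up; libvirt fills in the UUID and the capacity/allocation figures itself):

```
<pool type='dir'>
  <name>example-pool</name>
  <target>
    <path>/data/virt</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
```

Save it as, say, `example-pool.xml`, then load it with `virsh pool-define example-pool.xml` for a persistent pool, or `virsh pool-create example-pool.xml` for a transient one.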
```
# virsh pool-list --all --details
 Name          State    Autostart  Persistent  Capacity    Allocation  Available
----------------------------------------------------------------------------------
 data-libvirt  running  yes        yes         468.45 GiB  179.47 GiB  288.98 GiB
 default       running  yes        yes         422.38 GiB  213.00 GiB  209.39 GiB
 Downloads     running  yes        yes         422.38 GiB  213.00 GiB  209.39 GiB

# virsh pool-info virt
Name:           virt
UUID:           aa52fbc0-2088-43ce-9618-3a7993cb45e4
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       468.45 GiB
Allocation:     234.22 GiB
Available:      234.23 GiB

# virsh vol-list --pool virt-lvm-pool
 Name              Path
------------------------------------------------------------------------------
 home              /dev/falstaff-vg/home
 lv-openbsd-30gb   /dev/falstaff-vg/lv-openbsd-30gb
 lv-pi2-test       /dev/falstaff-vg/lv-pi2-test
 root              /dev/falstaff-vg/root
 swap_1            /dev/falstaff-vg/swap_1

# virsh pool-define-as test --type dir --target /data/virt
# virsh pool-start test
# virsh pool-autostart test

# virsh pool-dumpxml test
<pool type='dir'>
  <name>test</name>
  <uuid>aa52fbc0-2088-43ce-9618-3a7993cb45e4</uuid>
  <capacity unit='bytes'>502996557824</capacity>
  <allocation unit='bytes'>251489689600</allocation>
  <available unit='bytes'>251506868224</available>
  <source>
  </source>
  <target>
    <path>/data/virt</path>
    <permissions>
      <mode>0755</mode>
      <owner>1000</owner>
      <group>1000</group>
    </permissions>
  </target>
</pool>

# virsh pool-destroy test
Pool test destroyed

# virsh pool-create /etc/libvirt/storage/test.xml
Pool test created from /etc/libvirt/storage/test.xml

# virsh pool-destroy test
Pool test destroyed

# virsh pool-undefine test
Pool test has been undefined

# virsh find-storage-pool-sources logical
vg0

# sudo vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg0" using metadata type lvm2

# vgs --all
  VG   #PV #LV #SN Attr   VSize VFree
  vg0    1   7   0 wz--n- 2.73t 1.91t

$ ls /dev | grep vg0
vg0

# virsh pool-define-as --name vg0 --type logical --target /dev/vg0
# virsh pool-start vg0
Pool vg0 started
# virsh pool-autostart vg0
Pool vg0 marked as autostarted

# virsh pool-list --all --details
 Name     State    Autostart  Persistent  Capacity    Allocation  Available
---------------------------------------------------------------------------
 default  running  yes        yes         110.00 GiB  9.70 GiB    100.30 GiB
 vg0      running  yes        yes         2.73 TiB    841.56 GiB  1.91 TiB

# virsh vol-create-as vg0 myvm.img 10G
# virsh vol-delete --pool vg0 myvm.img

# lvs
  LV         VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  2012r2     vg0  -wi-a---  80.00g
  myvm.img   vg0  -wi-a---  10.00g
  lvroot     vg0  -wi-ao-- 111.76g
  lvswap     vg0  -wi-ao--  29.80g
  smbshare   vg0  -wi-ao-- 450.00g
  virtbackup vg0  -wi-ao-- 320.00g

# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg0    1   8   0 wz--n- 2.73t 1.63t
```

VIRT-INSTALL(1):

```
--os-variant OS_VARIANT
    Optimize the guest configuration for a specific operating system (ex.
    'fedora18', 'rhel7', 'winxp'). While not required, specifying this
    option is HIGHLY RECOMMENDED, as it can greatly increase performance
    by specifying virtio among other guest tweaks. Use the command
    "osinfo-query os" to get the list of the accepted OS variants.
```

```
# apt-get install libosinfo-bin
$ osinfo-query os
 Short ID       | Name                              | Version | ID
----------------+-----------------------------------+---------+------------------------------------
 centos7.0      | CentOS 7.0                        | 7.0     | http://centos.org/centos/7.0
 debian9        | Debian Stretch                    | 9       | http://debian.org/debian/9
 debiantesting  | Debian Testing                    | testing | http://debian.org/debian/testing
 fedora25       | Fedora 25                         | 25      | http://fedoraproject.org/fedora/25
 freebsd11.0    | FreeBSD 11.0                      | 11.0    | http://freebsd.org/freebsd/11.0
 openbsd5.8     | OpenBSD 5.8                       | 5.8     | http://openbsd.org/openbsd/5.8
 rhel6.8        | Red Hat Enterprise Linux 6.8      | 6.8     | http://redhat.com/rhel/6.8
 rhel7.0        | Red Hat Enterprise Linux 7.0      | 7.0     | http://redhat.com/rhel/7.0
 rhel7.1        | Red Hat Enterprise Linux 7.1      | 7.1     | http://redhat.com/rhel/7.1
 rhel7.2        | Red Hat Enterprise Linux 7.2      | 7.2     | http://redhat.com/rhel/7.2
 ubuntu16.04    | Ubuntu 16.04                      | 16.04   | http://ubuntu.com/ubuntu/16.04
 ubuntu14.04    | Ubuntu 14.04 LTS                  | 14.04   | http://ubuntu.com/ubuntu/14.04
 win10          | Microsoft Windows 10              | 10.0    | http://microsoft.com/win/10
 win2k12        | Microsoft Windows Server 2012     | 6.3     | http://microsoft.com/win/2k12
 win2k12r2     | Microsoft Windows Server 2012 R2  | 6.3     | http://microsoft.com/win/2k12r2
 win2k3         | Microsoft Windows Server 2003     | 5.2     | http://microsoft.com/win/2k3
 win2k3r2       | Microsoft Windows Server 2003 R2  | 5.2     | http://microsoft.com/win/2k3r2
 win2k8         | Microsoft Windows Server 2008     | 6.0     | http://microsoft.com/win/2k8
 win2k8r2       | Microsoft Windows Server 2008 R2  | 6.1     | http://microsoft.com/win/2k8r2
 win7           | Microsoft Windows 7               | 6.1     | http://microsoft.com/win/7
 win8           | Microsoft Windows 8               | 6.2     | http://microsoft.com/win/8
 win8.1         | Microsoft Windows 8.1             | 6.3     | http://microsoft.com/win/8.1
 winvista       | Microsoft Windows Vista           | 6.0     | http://microsoft.com/win/vista
 winxp          | Microsoft Windows XP              | 5.1     | http://microsoft.com/win/xp
```

```
# virt-install \
	--name=centos7 \
	--disk pool=data-libvirt,cache=none,format=qcow2,size=16 \
	--os-variant=centos7.0 \
	--cdrom /home/paulgorman/Downloads/CentOS-7-x86_64-Minimal-1611.iso \
	--vcpus=2 \
	--ram=1024 \
	--graphics spice \
	--network bridge=br0
```

Enable the Serial Console
----------------------------------------------------------------------

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Troubleshooting-Troubleshooting_with_serial_consoles.html

In the guest, edit `/etc/default/grub`, and append this to the GRUB_CMDLINE_LINUX="" value: `console=tty0 console=ttyS0,115200`. On Red Hat, run `grub2-mkconfig -o /etc/grub2.cfg`, and reboot. On Debian, run `update-grub`, and reboot.

```
# virsh console centos7-test
```

Find the IP address of a guest (without a console connection)
----------------------------------------------------------------------

```
# virsh domiflist centos7
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet0      bridge     br0        virtio      52:54:00:80:ef:da

# nmap -sn 10.0.1.0/24 | grep -i -B2 52:54:00:80:ef:da
Nmap scan report for 10.0.1.134
Host is up (-0.087s latency).
MAC Address: 52:54:00:80:EF:DA (QEMU virtual NIC)
```

(Or `ip nei show dev br0`?)
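The MAC lookup above can be scripted. This is a sketch: it assumes `virsh domiflist` keeps its tabular format (MAC in the fifth column, two header lines), and a canned copy of that output stands in for a live `virsh` call here.

```shell
#!/bin/sh
# Extract a guest's MAC address from `virsh domiflist` output:
# skip the two header lines, take the fifth column.
# Canned sample output (assumed format) instead of a live virsh call:
domiflist='Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet0      bridge     br0        virtio      52:54:00:80:ef:da'

mac=$(printf '%s\n' "$domiflist" | awk 'NR > 2 && NF >= 5 { print $5 }')
echo "$mac"

# On a live host, something like:
#   mac=$(virsh domiflist centos7 | awk 'NR > 2 && NF >= 5 { print $5 }')
#   ip neigh show dev br0 | grep -i "$mac"
```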
Attach more storage to a guest
----------------------------------------------------------------------

```
# virsh attach-disk myguest /var/lib/libvirt/images/file.img vdb --cache none
```

Snapshots
----------------------------------------------------------------------

Move VM to Another Host Manually
----------------------------------------------------------------------

```
host-a~# virsh shutdown my-vm
host-a~# virsh dumpxml my-vm > my-vm.xml
host-a~# scp my-vm.xml host-b:
host-a~# scp /var/lib/libvirt/images/my-vm.qcow2 host-b:/var/lib/libvirt/images/
host-a~# virsh undefine my-vm
host-a~# rm /var/lib/libvirt/images/my-vm.qcow2
host-b~# virsh define my-vm.xml
host-b~# virsh start my-vm
```

Live Migration
----------------------------------------------------------------------

Live migration should be possible, even without shared storage. Without shared storage, it's necessary to pre-define identical storage at the destination. Know the root password for the destination (is there no way to "sudo" this? https://libvirt.org/remote.html). (So, remote root ssh logins need to be enabled?!) It may be necessary to open TCP ports 49152-49215 on the destination (or to avoid opening the ports by passing the `--tunnelled` flag, if available).
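Rather than opening the whole 49152-49215 range, the migration port range can apparently be narrowed in `/etc/libvirt/qemu.conf` on the destination. A sketch, assuming a libvirt recent enough to have the `migration_port_min`/`migration_port_max` settings:

```
# /etc/libvirt/qemu.conf on the destination host (assumed settings;
# check the comments in the shipped qemu.conf for your libvirt version).
# Restrict the ports libvirt uses for incoming migrations, so the
# firewall only needs a few ports open:
migration_port_min = 49152
migration_port_max = 49159
```

Restart `libvirtd` after changing it.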
- https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-KVM_live_migration-Live_KVM_migration_with_virsh.html
- https://developers.redhat.com/blog/2015/03/24/live-migrating-qemu-kvm-virtual-machines/
- http://wiki.libvirt.org/page/FAQ#Migration
- https://wiki.libvirt.org/page/VM_lifecycle
- http://libvirt.org/migration.html#uris

```
host-b~# iptables -A INPUT -p tcp --match multiport --dports 49152:49215 -j ACCEPT

host-a~# virsh vol-list vg0
 Name              Path
-----------------------------------------
 myvm.img          /dev/vg0/myvm.img
 doorentry         /dev/vg0/doorentry
 lvroot            /dev/vg0/lvroot
 lvswap            /dev/vg0/lvswap
 postoffice-data   /dev/vg0/postoffice-data
 postoffice-os     /dev/vg0/postoffice-os
 virtbackup        /dev/vg0/virtbackup
 wolf              /dev/vg0/wolf

host-b~# sudo virsh vol-create-as vg0 myvm.img 10G
host-a~# virsh migrate --copy-storage-all --persistent --live my-vm qemu+ssh://host-b/system
```

To migrate a non-running guest:

```
host-a~# virsh migrate --copy-storage-all --persistent --offline my-vm qemu+ssh://host-b/system
```

Spice/VNC console
----------------------------------------------------------------------

```
# apt-get install spice-client-gtk
# virsh domdisplay my-vm
spice://127.0.0.1:5900
$ spicy --uri=spice://127.0.0.1:5900
# virsh vncdisplay my-vm
:1
```

(":1" seems to mean 5901.)

Once we know the guest console port, SSH forwarding works for Spice or VNC too.

```
$ ssh -L <local-port>:localhost:<guest-console-port> hypervisor.example.com
$ ssh -L 5900:localhost:5900 hypervisor.example.com
$ spicy --uri=spice://127.0.0.1:5900
```

Or, through a jump host:

```
$ ssh -A -t jump.example.com -L 7900:localhost:6900 ssh -A -t hypervisor.example.com -L 6900:localhost:5900
$ spicy --uri=spice://127.0.0.1:7900
$ vncviewer 127.0.0.1::7900
```

virt-top
----------------------------------------------------------------------

```
# apt-get install virt-top
```

"3" shows block devices.
VirtIO Drivers for Windows Guests
----------------------------------------------------------------------

Installing the network driver is straightforward.

1. Shut off the VM. (I guess we could add a second interface to the live VM with `virsh attach-interface`.)
2. Change the network interface model from "e1000" (or "rtl8139" or whatever) to "virtio". Do this via `virsh edit myvm`.
3. Spin up the VM.
4. Attach the VirtIO driver ISO to the image, and add the new drivers from inside the guest.

```
$ wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.141-1/virtio-win.iso
# virsh attach-disk myvm ~/virtio-win.iso hdb --type cdrom
# virsh domiflist myvm
```

Installing the VirtIO disk bus driver is slightly less straightforward, but not a huge deal. We need to trick Windows into loading the driver by adding a small, temporary volume of the VirtIO type.

1. Create a stub volume, and attach it to the VM. We probably need to restart the guest.
2. Attach the VirtIO driver ISO to the image. In the guest's Device Manager, load the VirtIO driver for the new storage controller device.
3. Shut down the VM, and change the bus of the main storage volume from IDE to VirtIO. Remove the stub volume.
4. Start the guest. That's it.

```
# virsh vol-create-as vg0 stub 1G
# virsh attach-disk doorentry /dev/vg0/stub vdc --config
# virsh detach-disk doorentry vdc --config
# virsh vol-delete stub --pool vg0
```

(Newer versions of `virsh attach-disk` can specify the bus explicitly with a `--targetbus virtio` flag. Older versions only inferred the bus type from the target name, like "hda", "sdb", or "vdc".)

Delete an unwanted VM
----------------------------------------------------------------------

```
# virsh undefine --nvram --remove-all-storage --delete-snapshots myvm
```

Links
----------------------------------------------------------------------

- http://libguestfs.org/
- https://wiki.libvirt.org/page/FAQ