KVM HA cluster how-to: the basics

Live migration of VMs with KVM

The kernel-based hypervisor KVM (Kernel-based Virtual Machine) has long since gained the upper hand on Linux systems. One reason is that KVM is available in all common Linux distributions without additional configuration, and large distributors such as Red Hat and Suse are continuously working on improving it. For most users, the upper performance limits in terms of memory usage and number of VMs are less interesting than the bread-and-butter features that the competitor VMware already offered when KVM was still in its infancy.

In the meantime, KVM has caught up and has been offering live migration of virtual machines [1] for several years, a prerequisite for reliably virtualizing services. VMs, and with them the services they host, can be moved from one server to another, which is useful for load distribution or hardware maintenance, for example. High availability of services is also easier to achieve with the help of live migration. Live migration means that the virtual machine keeps running except for the shortest possible interruption; ideally, client computers that use a VM as a server will not notice the migration at all.

Live migration is implemented with a number of sophisticated techniques that are intended to ensure, on the one hand, that the interruption of the service is as short as possible and, on the other hand, that the state of the migrated machine corresponds exactly to that of the original. To do this, the hypervisors involved start transferring the VM's RAM and at the same time keep track of the memory pages that the still-running source machine modifies. Once the remaining changes are small enough to be transferred in the allotted time, the original VM is paused, the memory that has not yet been transferred is copied to the target, and the machine is resumed there.
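
In practice, such a migration is triggered through libvirt. As a minimal sketch: assuming a running VM named "fedora-22" and a destination host "node1" reachable via SSH (names as in Image 1; adjust them to your own setup), the following call performs a live migration:

virsh migrate --live --verbose fedora-22 qemu+ssh://node1/system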

Shared storage required

A prerequisite for the live migration of KVM machines is that the disks involved reside on shared storage, i.e. data storage that both the source and the destination host can access. This can be achieved, for example, with NFS, iSCSI, or Fiber Channel, but also with distributed or cluster file systems such as GlusterFS and GFS2. Libvirt, the abstraction layer for managing different hypervisors on Linux, organizes data storage in so-called storage pools. In addition to the technologies mentioned, these can also be conventional disks as well as ZFS pools or Sheepdog clusters.
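
For example, an NFS export can be registered as a libvirt storage pool of type "netfs" and started automatically; the pool name "nfspool", the server "nfsserver", and the paths below are placeholders for your own environment:

virsh pool-define-as nfspool netfs --source-host nfsserver --source-path /nfs --target /var/lib/libvirt/images/nfspool
virsh pool-start nfspool
virsh pool-autostart nfspool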

The different technologies for shared storage offer different advantages and require more or less configuration effort. If you use storage appliances from well-known manufacturers, you can use their network storage protocols, i.e. iSCSI, Fiber Channel, or NFS. NetApp Clustered ONTAP, for example, also offers support for pNFS, which newer Linux distributions such as Red Hat Enterprise Linux and CentOS support as of version 6.4. iSCSI, on the other hand, can be operated redundantly thanks to multipathing, but again the appropriate network infrastructure must be available. Fiber Channel is the most expensive option in this regard, as it requires its own SAN (Storage Area Network). A copper-based alternative exists in the form of FCoE (Fiber Channel over Ethernet), which is effectively offered only by Mellanox. An even more exotic solution is SCSI RDMA over InfiniBand or iWARP.
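
As an illustration only: if both the storage system and the client support pNFS, the client mounts the export as NFS v4.1, roughly like this (the server name "nfsserver" and the paths are placeholders):

mount -t nfs -o vers=4,minorversion=1 nfsserver:/vmstore /var/lib/libvirt/images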

Set up a test environment with NFS

For initial tests, you can also go one size smaller, i.e. with NFS, because such storage can be set up with everyday resources. With a Linux server running CentOS 7, for example, an NFS server is installed in next to no time. Because the NFS server is largely implemented in the Linux kernel, all that is missing is the RPCBind daemon and the necessary tools for managing NFS, which the "nfs-utils" package provides. The directories made available ("exported") by the server are listed in the file "/etc/exports". The following call starts the NFS service:

systemctl start nfs
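
For completeness, a minimal sketch of the preceding installation steps and boot-time activation, assuming CentOS 7 package and unit names (there the NFS server unit also appears as "nfs-server"):

yum install -y nfs-utils
systemctl start rpcbind
systemctl enable rpcbind nfs-server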

A call to "systemctl status nfs" provides information about success or failure, and "exportfs" lists the currently exported directories. In principle, the syntax of the exports file is simple: the exported directory, followed by the IP address or host name of the computers that are allowed to mount it, followed by the options that define, for example, whether the share is exported read-only or writable. There are quite a few of these options, as a look at the man page for "exports" shows. To be able to write to the network share from other computers with root privileges, you must allow this with the "no_root_squash" option. A corresponding line in "/etc/exports" would look something like this:

/nfs 192.168.1.0/24(rw,no_root_squash)

This means that all computers from the 192.168.1.0/24 network can mount the "/nfs" directory and write to it with root privileges. You can then restart the NFS server or simply tell it about the changes with "exportfs -r". If you now try to mount the directory from a computer in that network, it may still fail, because you also have to adjust the firewall on the server, i.e. open TCP port 2049.
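
On a CentOS 7 server with firewalld, opening up NFS and then test-mounting the export from a client could look roughly like this (the server address 192.168.1.10 is a placeholder):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
mount -t nfs 192.168.1.10:/nfs /mnt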

Image 1: The virtual machine "fedora-22" was transferred from the computer node2 to node1 in just under three seconds.