Wednesday 2 October 2013

How to Use Unified vMotion: New Look, Enhanced Features

You probably know that VMware ESXi is a great hypervisor that can host multiple isolated virtual machines, each with its own operating system and each able to run its own applications. VMware vSphere goes further towards reducing costs and maximizing IT efficiency by cutting the potential revenue lost to downtime, outages, and failures. It does this by enabling a number of features through VMware vCenter Server or the vCenter appliance, which we installed in earlier articles.
vMotion is one of the most elementary of these features and a prerequisite for other, more advanced ones. I remember years ago watching demos showing a virtual machine being moved from one host to another without losing more than a single ping and considering it some sort of technical magic. Now vMotion is so common that at any given time more VMs are in motion than planes are in flight (estimates are 6 vMotions per second in datacenters worldwide), yet it is still just as impressive.
vMotion Moves Live
Thanks to VMware for the figure showing how vMotion moves live, running virtual machines from one host to another while maintaining continuous service availability.

Traditional vMotion

Traditional vMotion keeps the disks on shared storage that both hosts have access to, while moving the virtual machine's active memory and execution state. During the migration, vMotion keeps track of ongoing memory changes in a bitmap and copies them over at the end of the process. vMotion also notifies the network (with a reverse ARP sent from the destination host) as soon as the VM is moved, announcing its new location to switches and other network devices.
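Conceptually, the memory copy works as an iterative pre-copy: memory is transferred in rounds while the VM keeps running, each round re-sending only the pages that were dirtied during the previous one, until the remaining set is small enough to send during a very short switchover. The little Python sketch below is purely illustrative (made-up numbers and names, not VMware code), but it shows why the process converges.

import random

TOTAL_PAGES = 1_000_000           # pretend the VM has this many memory pages
dirty = set(range(TOTAL_PAGES))   # first round: every page must be copied

def run_vm_for_a_while(previous_dirty_count):
    # Simulate the guest touching a shrinking fraction of its memory
    # while the previous batch of pages is being copied.
    return set(random.sample(range(TOTAL_PAGES), k=previous_dirty_count // 20))

round_no = 0
while len(dirty) > 1_000 and round_no < 30:    # arbitrary cut-off values
    pages_to_send = dirty                      # copy these while the VM runs
    dirty = run_vm_for_a_while(len(pages_to_send))
    round_no += 1
    print(f"round {round_no}: sent {len(pages_to_send)} pages, "
          f"{len(dirty)} dirtied in the meantime")

# Switchover: briefly stun the VM, send the last (small) set of dirty pages,
# then resume the VM on the destination host and announce its new location.
print(f"stun, send final {len(dirty)} pages, resume on destination host")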
The primary use for vMotion is evacuating a host in preparation for scheduled maintenance or patching. It can also be used to balance the load between hosts by moving VMs off busy hosts to less busy ones (manually or automatically by DRS). It can even be used to consolidate VMs on fewer hosts when activity is low, so the extra hosts can be turned off to save power (manually or automatically by DPM).

Requirements are Simple

vSphere License

The vSphere license on the hosts must include vMotion. vMotion is supported with vSphere Essentials Plus, Standard, Enterprise, or Enterprise Plus (all editions except the free hypervisor and the Essentials kit).

Shared Storage

Shared storage used to be a requirement for traditional vMotion, as the state of the virtual machine is encapsulated in a set of files that both hosts must be able to access during the process. This requirement has been removed with enhancements made in vSphere 5.1, as we will discuss at the end of the article.

Network Labeling

In an earlier article, we discussed the importance of proper virtual network labeling. vMotion requires the same virtual machine network names to be configured on the source and destination hosts before it will even start. Also, make sure that the networks' properties are the same, including VLANs and other security settings, to ensure consistent operation.
Network Labeling
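If you want to sanity-check this from a script before attempting a migration, a quick sketch with pyVmomi (the vSphere Python SDK) can compare the standard port group names on two hosts. The vCenter address, credentials, and host names below are placeholders for a lab environment, so adjust them for yours.

import ssl
from pyVim.connect import SmartConnect

# Placeholder connection details for a lab vCenter -- adjust as needed.
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())

def portgroup_names(dns_name):
    # Return the set of standard port group names configured on a host.
    host = si.content.searchIndex.FindByDnsName(datacenter=None,
                                                dnsName=dns_name,
                                                vmSearch=False)
    return {pg.spec.name for pg in host.config.network.portgroup}

missing = portgroup_names("esxi01.lab.local") - portgroup_names("esxi02.lab.local")
if missing:
    print("Port groups missing on the destination host:", missing)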

CPU Compatibility

An important requirement is also a reasonable degree of CPU compatibility between the source and destination hosts. vMotion between an AMD host and an Intel host is simply not possible. Even CPUs from the same vendor need to be from the same family or present a compatible feature set. Enhanced vMotion Compatibility (EVC) can mask some CPU features, making all hosts offer a lowest common denominator to the VMs. EVC is a feature of clusters and, as such, we will discuss it when we cover VMware DRS clusters.
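If you are not sure what CPUs your hosts have, a couple of lines against the API (reusing the si connection from the sketch above) will print the CPU model each host reports; identical or at least EVC-compatible models are what you are looking for. The host names are, again, placeholders.

# Reuses the si connection from the earlier sketch; host names are placeholders.
for name in ("esxi01.lab.local", "esxi02.lab.local"):
    host = si.content.searchIndex.FindByDnsName(datacenter=None,
                                                dnsName=name,
                                                vmSearch=False)
    hw = host.summary.hardware
    print(name, "-", hw.cpuModel, "-", hw.numCpuPkgs, "socket(s)")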

VMkernel

The last requirement is a VMkernel interface on both ESXi servers with vMotion enabled. Notice that in this case we already have a VMkernel port configured on this host, but it is being used for management traffic. It is recommended to create a separate VMkernel port for vMotion, preferably backed by a dedicated network card (or multiple cards) for migration traffic. This can be configured as follows:
Add Networking
Right-click on the ESXi host and choose “Add Networking.”
VMkernel Network Adapter
Obviously, on the next screen go with the first choice: VMkernel Network Adapter.

I strongly prefer to create a new standard switch for this to keep things separate and avoid any interference from other settings.
Add Network Card
You have to add at least one network card to the active adapters on the switch.
Available Adapters
You can choose from the list of available adapters.
Proceed
Proceed to the next screen.
Enable vMotion
This is the most critical step in the whole process. Enable the vMotion service on this port, and, as always, it is a good idea to provide a descriptive network label that illustrates the purpose of the port.
Provide IP Address
Provide a proper IP address for the new port. Since this network will live on a cable directly connected between the two hosts, I chose the 192.168.69.0/24 subnet for it.
Finish
To complete the requirements for vMotion, hit Finish on the last screen of this wizard.
vSwitch
You should now have a new vSwitch with a VMkernel port dedicated for vMotion.
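If you would rather script these steps, for example to repeat them quickly on the second host, a rough pyVmomi sketch along the following lines performs the same configuration. The switch name, port group label, physical NIC, and IP address are assumptions for this lab, and the si connection comes from the earlier sketch.

from pyVmomi import vim

# Reuses the si connection from the earlier sketch; all names are lab placeholders.
host = si.content.searchIndex.FindByDnsName(datacenter=None,
                                            dnsName="esxi01.lab.local",
                                            vmSearch=False)
net_sys = host.configManager.networkSystem

# 1. A new standard switch dedicated to vMotion, backed by a spare physical NIC.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch-vMotion", spec=vss_spec)

# 2. A port group with a descriptive label on that switch.
pg_spec = vim.host.PortGroup.Specification(
    name="vMotion", vlanId=0, vswitchName="vSwitch-vMotion",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

# 3. The VMkernel interface with a static IP on the vMotion subnet.
vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="192.168.69.11",
                         subnetMask="255.255.255.0"))
vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vnic_spec)

# 4. Tag the new VMkernel interface for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)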
Meet Requirements
As expected, all requirements should be met on all hosts where you are planning to perform vMotion.

Performing the Magic Trick

At this point, the real work is done and you can start vMotioning your VMs around by right-clicking on them and choosing “Migrate.”
vMotioning
To perform traditional vMotion, you pick the “Change host” option to move the VM to another host.
Change Host
Then you pick the destination host (or cluster). A compatibility check is automatically performed to make sure that the destination meets the requirements of vMotion and can host the to-be-moved VM. If it does not, the wizard will not let you go any further, as we saw above when we tried to move a VM to a host that does not have the needed virtual network.
Destination Host
If you have extra busy hosts, on the following screen you can choose to be extra careful and only move the VM when there are sufficient CPU resources for the process. Otherwise, the process may take significantly longer to complete.
CPU Resources
There is nothing left to do but take off. If you are in the mood, you can monitor a continuous ping to the VM while it is in vMotion.
Review Selections
You should not notice more than one lost ping near the end of the process. At this stage the VM state is briefly suspended and the final bitmap of memory changes is moved to the destination host.
Recent Tasks
Before long (20 seconds in this case), the VM will be living happily on the destination host.
Summary Screen
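The same migration can also be started through the API. Below is a minimal pyVmomi sketch reusing the si connection from the earlier examples; the VM name and destination host name are made up for illustration, and the VM is located by the DNS name its VMware Tools report.

import time
from pyVmomi import vim

# Placeholders: a running VM and the destination host, located by DNS name.
vm = si.content.searchIndex.FindByDnsName(datacenter=None,
                                          dnsName="testvm01.lab.local",
                                          vmSearch=True)
dest = si.content.searchIndex.FindByDnsName(datacenter=None,
                                            dnsName="esxi02.lab.local",
                                            vmSearch=False)

# Hot-migrate the VM to the destination host; disks stay on shared storage.
task = vm.MigrateVM_Task(host=dest,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)

# Poll the task until vCenter reports success or an error.
while task.info.state not in (vim.TaskInfo.State.success,
                              vim.TaskInfo.State.error):
    time.sleep(1)
print("Migration finished with state:", task.info.state)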

Storage vMotion

In contrast to traditional vMotion, storage vMotion moves VM virtual disks from one datastore to another while the VM memory and processing stays on the same host.
This feature is used for balancing storage utilization by moving virtual disks off nearly-full or over-utilized datastores onto less utilized ones. It can also be used to evacuate virtual disks from SANs that need maintenance or are scheduled to be replaced. Alternatively, it can be used to separate certain disks (database logs, for example) from the rest of a VM's disks, or to change the format of a virtual disk, as we will see shortly.
This is considered a more advanced feature than traditional vMotion, and it therefore used to be available only to vSphere Enterprise or Enterprise Plus customers, as they are the most likely to have many datastores in their environments. With the vSphere 5.1 licensing changes, this great feature has become available in the Standard edition as well, but it is not (yet) available in the Essentials kits, which are targeted at small and medium-sized businesses (SMBs).
Storage vMotion
Thanks to VMware for this diagram illustrating Storage vMotion.
You start Storage vMotion the same way you start traditional vMotion: by right-clicking a VM and choosing “Migrate.” However, this time your aim is to “Change datastore.”
Change Datastores
You simply pick a destination datastore that has enough disk space to host the virtual disks and is compatible with the needs of the VM. The Advanced button allows you to pick different datastores for the VM configuration files and for any of the virtual disks attached to it.
Notice that at this point you can change the format of the virtual disk between thin and thick. As far as I know, there is no other place in the GUI where you can do this.
Format
That is it: the Storage vMotion truck is ready to start moving the large virtual disks from one datastore to another while the machine is running. This process may take minutes or even hours depending on the size of the virtual disks and their level of activity.
Migrate
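For completeness, the API side of Storage vMotion is the RelocateVM_Task method. The short sketch below reuses the si connection and the vm object from the earlier examples; the destination datastore name is an assumption. (The thin/thick format choice the wizard offers can also be expressed in the relocate spec, but that is beyond this sketch.)

from pyVmomi import vim

# Pick a destination datastore visible to the VM's current host
# ("datastore2" is a placeholder name).
dest_ds = next(ds for ds in vm.runtime.host.datastore if ds.name == "datastore2")

# Moving only the storage: the spec names a datastore but no host.
spec = vim.vm.RelocateSpec(datastore=dest_ds)
task = vm.RelocateVM_Task(spec=spec)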

Enhanced or Unified vMotion

Prior to vSphere 5.1, you could only move a VM from both its host and its datastore at the same time if you turned it off. This meant that unless you had shared storage (which can be expensive for most SMBs), you could not perform vMotion at all.
vSphere 5.1 combines vMotion with parts of the Storage vMotion code to let you migrate VMs between hosts that do not share storage. This great enhancement was not branded by VMware as a separate feature and is available in all kits and editions that include vMotion.
Enhanced vMotion
A diagram from VMware showing the new enhanced vMotion.
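Through the API, this shows up as a single RelocateVM_Task call that changes both the host and the datastore at once (on vSphere 5.1 or later). A rough sketch, reusing the si, vm, dest, and dest_ds objects from the earlier examples:

from pyVmomi import vim

# Shared-nothing (unified) vMotion: host and datastore change in one call.
# Requires vSphere 5.1 or later; if the destination host sits in another
# cluster, you may also need to set spec.pool to a resource pool there.
spec = vim.vm.RelocateSpec(host=dest, datastore=dest_ds)
task = vm.RelocateVM_Task(spec=spec)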

Conclusion

VMware vMotion is still as impressive as it was in 2003 when it was first introduced. For some time it served VMware as one of its competitive advantages. Microsoft was not able to offer similar functionality with Hyper-V in Windows Server 2008 and had to wait for the R2 release to offer true “live migration.”
Although today this functionality is a common offering from virtualization vendors (some, like XenServer, even offer it in the free editions of their products), VMware claims the best performance and execution.
Regardless of this debate, vMotion is also the basis for many more advanced features offered by VMware like DRS and DPM.
