In our last article we discussed performance tuning of Hyper-V storage I/O. In this article we’ll focus on network I/O performance best practices.
Hyper-V supports synthetic and emulated network adapters in virtual machines, but in general the synthetic adapters offer much better performance with lower CPU overhead. Both types of adapter connect to a Hyper-V virtual network switch. If you need external network connectivity, connect the virtual switch (or switches) to a physical network adapter.
Let’s start by taking a look at synthetic network adapter configuration.
The Hyper-V synthetic network adapter is designed specifically for virtual machines, achieving lower CPU overhead on network I/O than an emulated network adapter that mimics existing hardware. The synthetic network adapter communicates between the child and root partitions over VMBus, using shared memory for more efficient data transfer.
For best performance, remove the emulated network adapter through the VM settings dialog box and replace it with a synthetic network adapter. Remember that the synthetic adapter requires the VM integration services to be installed in the guest.
Perfmon counters representing the network statistics for the installed synthetic network adapters are available under the counter set \Hyper-V Virtual Network Adapter(*)\*.
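If you want to sample these counters from a script rather than the Perfmon UI, the built-in typeperf tool can emit them as CSV. The sketch below is an illustration only: the "Bytes/sec" counter name and the sample output layout are assumptions, so list the real counter names with `typeperf -q` on your host first.

```python
import csv
import subprocess
import sys

# Counter set named in the article; "Bytes/sec" is an assumed counter name.
COUNTER = r"\Hyper-V Virtual Network Adapter(*)\Bytes/sec"

def parse_typeperf_csv(text):
    """Parse typeperf CSV output into {counter_path: latest_value}."""
    # Data lines start with a quoted field; status chatter does not.
    lines = [ln for ln in text.splitlines() if ln.startswith('"')]
    rows = list(csv.reader(lines))
    if len(rows) < 2:
        return {}
    header, last = rows[0], rows[-1]
    # Column 0 is the timestamp; the rest are counter samples.
    return {h: float(v) for h, v in zip(header[1:], last[1:]) if v}

if sys.platform == "win32":
    # Take one sample (-sc 1) of every instance of the counter, as CSV.
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", "1"],
        capture_output=True, text=True, check=True,
    ).stdout
    for path, value in parse_typeperf_csv(out).items():
        print(f"{path}: {value:,.0f}")
```

The same approach works for the virtual switch counter set discussed later; only the counter path changes.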
Virtual machines with more than one virtual processor might benefit from having more than one synthetic network adapter installed. Network-intensive workloads, such as a web server, benefit from a second synthetic network adapter because the virtual network stack can then process traffic in parallel.
We touched on this in our first article, “Hardware Selection”. Offload capabilities in the physical network adapter reduce the CPU usage of network I/O in virtual scenarios. Hyper-V supports LSOv1 (Large Send Offload version 1) and TCPv4 checksum offload. To use these capabilities, enable them in the driver settings of the physical network adapter in the root partition, and make sure to explicitly enable LSOv1.
Hyper-V supports creating multiple virtual network switches, each of which can be attached to a physical network adapter if needed. Each network adapter in a VM can be connected to a virtual network switch. If the physical server has multiple network adapters, VMs with network-intensive loads can benefit from being connected to different virtual switches to better use the physical network adapters.
Perfmon counters representing the network statistics for the installed virtual switches are available under the counter set \Hyper-V Virtual Switch(*)\*.
Hyper-V’s synthetic network adapter supports VLAN tagging. Tagging performs significantly better if the physical network adapter supports NDIS_ENCAPSULATION_IEEE_802_3_P_AND_Q_IN_OOB encapsulation for both large send and checksum offload.
Windows Server 2008 R2 supports VMQ-enabled network adapters which can maintain a separate hardware queue for each VM, up to the limit supported by each network adapter.
Use the Hyper-V WMI API to verify that the virtual machines are assigned a hardware queue.
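As a starting point for that check, the sketch below connects to the Hyper-V WMI namespace and enumerates the virtual machines. It assumes the third-party "wmi" package (built on pywin32) is installed on the host; on Windows Server 2008 R2 the Hyper-V classes live in root\virtualization. The queue-assignment details themselves vary by OS version, so they are left as a pointer rather than invented class names.

```python
import sys

def is_vm(caption):
    """Msvm_ComputerSystem covers the host and every VM; VMs carry this caption."""
    return caption == "Virtual Machine"

if sys.platform == "win32":
    import wmi  # assumption: pip install wmi (requires pywin32)

    virt = wmi.WMI(namespace=r"root\virtualization")
    for system in virt.Msvm_ComputerSystem():
        if is_vm(system.Caption):
            print(system.ElementName)
    # From each VM, walk its associated synthetic Ethernet port settings to
    # inspect queue assignment; the exact class and property names are
    # version-specific, so consult the Hyper-V WMI reference for your OS.
```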
Windows Server 2008 R2 supports VM Chimney (TCP offload into the physical network adapter), which benefits long-lived network connections the most. Enable VM Chimney for virtual machines that carry this type of network load.
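The host-side TCP Chimney state can be inspected with the built-in `netsh int tcp show global` command (and enabled with `netsh int tcp set global chimney=enabled`); per-VM Chimney is configured separately in the VM settings. A minimal sketch of reading that state from a script, with the exact label format of the netsh output treated as an assumption:

```python
import subprocess
import sys

def chimney_state(netsh_output):
    """Pull the 'Chimney Offload State' value out of netsh output, if present."""
    for line in netsh_output.splitlines():
        if "Chimney Offload State" in line:
            return line.split(":", 1)[1].strip()
    return None

if sys.platform == "win32":
    out = subprocess.run(
        ["netsh", "int", "tcp", "show", "global"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("Chimney Offload State:", chimney_state(out))
```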
Live migration is the process that moves a running virtual machine from one node of a failover cluster to another without dropping the network connection.
Best practice is to provide a dedicated network for Live Migration traffic. This helps to minimize the time required to complete a migration.
Another way to improve migration performance is to increase the number of receive and send buffers on each network adapter involved in the migration.
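On Windows Server 2008 R2 these buffer counts are advanced driver properties, normally edited on the adapter’s Advanced property page; they also surface as registry values under the standard network adapter class key. The read-only sketch below lists them, under stated assumptions: the value names ReceiveBuffers and TransmitBuffers, and the 2048 ceiling and 256 step in next_buffer_size, are driver-specific guesses, so check your adapter’s documentation before changing anything.

```python
import sys

# Standard network adapter device class key; each NIC is a numbered subkey.
NIC_CLASS = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4D36E972-E325-11CE-BFC1-08002BE10318}")

# Assumed value names; some drivers use different ones (or none at all).
BUFFER_VALUES = ("ReceiveBuffers", "TransmitBuffers")

def next_buffer_size(current, step=256, ceiling=2048):
    """Suggest a larger buffer count, capped at an assumed driver ceiling."""
    return min(current + step, ceiling)

if sys.platform == "win32":
    import winreg

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NIC_CLASS) as cls:
        for i in range(winreg.QueryInfoKey(cls)[0]):  # subkey count
            name = winreg.EnumKey(cls, i)
            if not name.isdigit():
                continue
            with winreg.OpenKey(cls, name) as nic:
                for value in BUFFER_VALUES:
                    try:
                        data, _ = winreg.QueryValueEx(nic, value)
                        print(f"{name} {value}: {data} "
                              f"(suggested: {next_buffer_size(int(data))})")
                    except (FileNotFoundError, ValueError):
                        pass  # value absent or not numeric for this driver
```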