This is the second article in our series on tuning the performance of your Windows Server 2008 R2 Hyper-V environment. In the first article we discussed the considerations to keep in mind when selecting the hardware components for your Hyper-V server(s).
As we mentioned in the first article, the hypervisor virtualizes the physical processors by time-slicing between the virtual processors, so moving a workload into a virtual machine inevitably adds some CPU overhead. In this article we'll discuss how to optimize processor usage.
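To make that time-slicing concrete, here is a toy Python model of the idea. It is a deliberately simplified round-robin sketch with hypothetical VM names, not the hypervisor's actual scheduling algorithm:

```python
# Toy model of hypervisor time-slicing: when there are more virtual
# processors (vCPUs) than logical processors (LPs), each vCPU runs
# only part of the time. Names and slice counts are illustrative only,
# not real Hyper-V behavior.
from collections import Counter

def schedule(vcpus, num_lps, num_slices):
    """Round-robin: in each scheduling slice, the next num_lps runnable
    vCPUs each get one slice of CPU time. Returns slices run per vCPU."""
    counts = Counter()
    i = 0
    for _ in range(num_slices):
        for _ in range(num_lps):
            counts[vcpus[i % len(vcpus)]] += 1
            i += 1
    return counts

if __name__ == "__main__":
    # Two 2-vCPU VMs sharing two logical processors for 10 slices:
    vcpus = ["vm1-vp0", "vm1-vp1", "vm2-vp0", "vm2-vp1"]
    counts = schedule(vcpus, num_lps=2, num_slices=10)
    for vp, n in sorted(counts.items()):
        print(vp, n)  # each vCPU runs 5 of 10 slices, i.e. 50% of an LP
```

With four virtual processors contending for two logical processors, each virtual processor runs only half the time, which is why background activity in one VM steals cycles from the others.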
VM Integration Services
The VM Integration Services include enlightened drivers for the synthetic I/O devices, which significantly reduce the CPU overhead of I/O compared to emulated devices. You should install the latest version of the VM Integration Services in every supported guest. The services decrease the CPU usage of guests, from idle guests to heavily used guests, and improve I/O throughput. This is the first step in tuning a Hyper-V server for performance. For the list of supported guest operating systems, see the documentation provided with the Hyper-V installation.
Hyper-V in Windows Server 2008 R2 supports a maximum of four virtual processors per VM. VMs whose loads are not CPU intensive should be configured to use one virtual processor. This avoids the additional overhead associated with multiple virtual processors, such as extra synchronization costs in the guest operating system. More CPU-intensive loads should be configured with two to four virtual processors if the VM requires more than one CPU's worth of processing under peak load.
Windows Server 2008 R2 features enlightenments to the core operating system that improve scalability in multiprocessor VMs. Workloads can benefit from these scalability improvements when they run in VMs with two to four virtual processors.
Background Activity Best Practices
Minimizing the background activity in idle VMs releases CPU cycles that can be used elsewhere by other VMs or saved to reduce energy consumption. Windows guests typically use less than 1 percent of one CPU when they are idle. The following are several best practices for minimizing the background CPU usage of a VM:
· Install the latest version of the VM Integration Services.
· Remove the emulated network adapter through the VM settings dialog box (use the Microsoft synthetic adapter).
· Remove unused devices such as the CD-ROM and COM port, or disconnect their media.
· Keep the Windows guest at the logon screen when it is not being used.
· Use Windows Server 2008 or Windows Server 2008 R2 for the guest operating system.
· Disable the screen saver.
· Disable, throttle, or stagger periodic activity such as backup and defragmentation.
· Review the scheduled tasks and services that are enabled by default.
· Improve server applications to reduce periodic activity (such as timers).
· Use the Balanced power plan instead of the High Performance power plan.
The following are additional best practices for configuring a client version of Windows in a VM to reduce the overall CPU usage:
· Disable background services such as SuperFetch and Windows Search.
· Disable scheduled tasks such as Scheduled Defrag.
· Disable Aero Glass and other user interface effects (through the System application in Control Panel).
Setting the Processor Weight
Hyper-V supports setting the weight of a virtual processor to grant it a larger or smaller share of CPU cycles than average, and setting the reserve of a virtual processor to guarantee it a minimum percentage of CPU cycles. The CPU time that a virtual processor consumes can also be capped by specifying a usage limit. System administrators can use these features to prioritize specific VMs, but we recommend keeping the default values unless you have a compelling reason to alter them.
Weights and reserves prioritize or de-prioritize specific VMs if CPU resources are overcommitted. This makes sure that those VMs receive a larger or smaller share of the CPU. Highly intensive loads can benefit from adding more virtual processors instead, especially when they are close to saturating an entire physical CPU.
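As a rough mental model, reserves are satisfied first and the remaining cycles are divided in proportion to weight. The Python sketch below illustrates that model with hypothetical VM names and settings; it is a simplification for intuition, not the hypervisor's actual scheduling algorithm:

```python
# Illustrative model of proportional-share scheduling under contention.
# Weight is a relative integer (default 100 in the Hyper-V UI) and
# reserve is a guaranteed percentage of CPU (default 0). This is a
# simplified sketch, not Hyper-V's real scheduler.

def cpu_shares(vms, total_cpu=100.0):
    """Return each VM's share of CPU (percent) when all are runnable.

    vms: dict of name -> {"weight": int, "reserve": float percent}
    """
    # First honor reserves: each VM is guaranteed at least its reserve.
    shares = {name: cfg["reserve"] for name, cfg in vms.items()}
    remaining = total_cpu - sum(shares.values())
    # Then split the remaining cycles proportionally by weight.
    total_weight = sum(cfg["weight"] for cfg in vms.values())
    for name, cfg in vms.items():
        shares[name] += remaining * cfg["weight"] / total_weight
    return shares

if __name__ == "__main__":
    vms = {
        "web":   {"weight": 200, "reserve": 0.0},   # prioritized
        "batch": {"weight": 100, "reserve": 0.0},
        "db":    {"weight": 100, "reserve": 20.0},  # guaranteed 20%
    }
    for name, pct in cpu_shares(vms).items():
        print(f"{name}: {pct:.1f}%")
```

Note how doubling a VM's weight doubles only its share of the cycles left over after reserves, which is why weights matter only when CPU is actually contended.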
Non-Uniform Memory Access (NUMA)
On Non-Uniform Memory Access (NUMA) hardware, each VM has a default NUMA node preference. Hyper-V uses this NUMA node preference when assigning physical memory to the VM and when scheduling the VM’s virtual processors. A VM performs optimally when its virtual processors and memory are on the same NUMA node.
By default, the system assigns the VM to its preferred NUMA node every time the VM is run. An imbalance of NUMA node assignments might occur depending on the memory requirements of each VM and the order in which each VM is started. This can lead to a disproportionate number of VMs being assigned to a single NUMA node.
Use Perfmon to check the NUMA node preference setting for each running VM by examining the \Hyper-V VM Vid Partition(*)\NumaNodeIndex counter.
You can change NUMA node preference assignments by using the Hyper-V API (WMI) and accessing the NumaNodeList property of the Msvm_VirtualSystemSettingData class.
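The start-order effect described above is easy to see with a simple placement model. The sketch below, with hypothetical node sizes and VM names, places each VM on the node with the most free memory at start time; it is a rough stand-in for illustration, not Hyper-V's actual assignment logic:

```python
# Simplified placement model: each VM is assigned, at start time, to
# the NUMA node with the most free memory. Node sizes and VM names are
# hypothetical; this approximates why start order can skew assignments.

def assign_numa_nodes(vms, node_memory_mb):
    """vms: list of (name, memory_mb) tuples in start order.
    node_memory_mb: list of free MB per NUMA node.
    Returns {vm_name: node_index}."""
    free = list(node_memory_mb)
    placement = {}
    for name, mem in vms:
        node = max(range(len(free)), key=lambda i: free[i])  # most free
        placement[name] = node
        free[node] -= mem
    return placement

if __name__ == "__main__":
    nodes = [32768, 32768]  # two hypothetical 32 GB nodes
    # Start order matters: a large VM started first can push every
    # later VM onto the other node.
    vms = [("vm1", 24576), ("vm2", 4096), ("vm3", 4096), ("vm4", 24576)]
    print(assign_numa_nodes(vms, nodes))
```

In this run, vm1 claims most of node 0, so vm2, vm3, and vm4 all land on node 1, the kind of disproportionate assignment the counter check above helps you spot.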
The hypervisor virtualizes the guest physical memory to isolate VMs from each other and provide a contiguous, zero-based memory space for each guest operating system. In general, memory virtualization can increase the CPU cost of accessing memory. On non-SLAT-based hardware, frequent modification of the virtual address space in the guest operating system can significantly increase the cost.
CPU Performance Counters
Hyper-V supports performance counters to measure the behavior and performance of the virtualization server. The standard set of tools for viewing performance counters in Windows includes Performance Monitor (Perfmon.exe) and Logman.exe, which can display and log the Hyper-V performance counters.
Microsoft recommends measuring the CPU usage of the physical system by using the Hyper-V Hypervisor Logical Processor performance counters. The CPU utilization counters that Task Manager and Performance Monitor report in the root and child partitions do not accurately capture the physical CPU usage. Use the following performance counters to monitor performance:
· \Hyper-V Hypervisor Logical Processor(*)\% Total Run Time – The counter represents the total non-idle time of the logical processor(s).
· \Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time – The counter represents the time spent executing cycles within a guest or within the host.
· \Hyper-V Hypervisor Logical Processor(*)\% Hypervisor Run Time – The counter represents the time spent executing within the hypervisor.
· \Hyper-V Hypervisor Root Virtual Processor(*)\* – These counters measure the CPU usage of the root partition.
· \Hyper-V Hypervisor Virtual Processor(*)\* – These counters measure the CPU usage of guest partitions.
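Logman.exe (and typeperf) can write these counters to a CSV log for offline analysis. The following Python sketch averages every column whose header contains "% Total Run Time"; the file path and the exact header text are assumptions you should adjust to match your own logs:

```python
# Sketch: summarize the "% Total Run Time" counter from a Perfmon CSV
# export (e.g., produced by logman or typeperf). The column-header
# matching string is an assumption; adjust it for your log format.
import csv
from statistics import mean

def average_counter(path, counter_substring="% Total Run Time"):
    """Average every CSV column whose header contains counter_substring.

    Returns {column_header: average_value}.
    """
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cols = [i for i, name in enumerate(header) if counter_substring in name]
    return {
        header[i]: mean(float(row[i]) for row in data if row[i].strip())
        for i in cols
    }
```

For example, after logging with logman and relogging to CSV, `average_counter("hyperv_cpu.csv")` (a hypothetical file name) would report the average non-idle time per logical processor instance over the capture window.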
In our next article we’ll discuss how to tune Hyper-V storage I/O performance.