In this article we’ll discuss best practices for deploying and managing a VDI environment based on Citrix XenDesktop and Microsoft Windows Server 2008 R2 Hyper-V.
Microsoft’s Hyper-V is a free product that was first released with Windows Server 2008. It is a hypervisor-based platform supporting virtual guest operating systems. With the release of Windows Server 2008 R2, new features were introduced to Hyper-V. One neat feature is live migration. This allows you to migrate a running virtual machine (VM) from one physical host to another. You can also add or remove storage from a VM while it is running.
If you’re still in the design phase of your VDI, Citrix has a design guide available, “XenDesktop Design Guide for Microsoft Windows 2008 R2 Hyper-V”. Besides the required hardware, you will need Microsoft System Center Virtual Machine Manager R2 (SCVMM), Citrix Desktop Delivery Controller (DDC), and Citrix Provisioning Services (PVS). Let’s take a brief look at what each of these software components does.
· Microsoft SCVMM enables you to manage the Hyper-V hosts in your environment using administrative consoles that are installed on the DDC and PVS servers. SCVMM helps you centralize management of the physical and virtual infrastructures and optimize system resources across Hyper-V and other platforms.
· Citrix DDC maps users to a specific desktop group and assembles a vDesktop based on the access policy assigned to those users in Active Directory. The desktop group that a user is mapped to publishes the vDesktop to the end user. Other responsibilities of the DDC include powering vDesktops on and off, monitoring vDesktop availability, and managing the reassignment of a vDesktop after a user ends a session.
· Citrix PVS makes it possible for many users to share the same desktop image: a single vDisk can be accessed by hundreds of VMs. PVS streams the image to the vDesktops in the form of a vDisk. Without PVS, administrators would have to maintain each virtual machine individually.
Optimizing Citrix PVS
When implementing a VDI environment, one of the challenges is properly configuring Citrix PVS. In this section we’ll briefly discuss how PVS works and suggest some best practices for configuring it.
PVS is based on software-streaming technology. It allows you to create a reference image on a physical workstation, create a vDisk based on this system, and save the vDisk to the PVS server. Once the vDisk is saved to the network, any client that wants to use this image does not require its own hard drive since it can boot directly from the network. PVS will stream the vDisk’s content directly to the client, on-demand and in real time. It behaves exactly the same as if it were running from a local hard drive. Also note that processing takes place on the client computer. Looking at the operational process step by step, the following describes how a vDisk image is downloaded to a client system:
1. The client system obtains an IP address from the DHCP server, as well as the address of a TFTP server.
2. The client system contacts the TFTP server and downloads the Network Bootstrap Program (NBP). This program contains the IP addresses of the PVS server(s).
3. The NBP then installs an I/O redirector and the PVS protocol onto the client.
4. The client system contacts the PVS server. If you have more than one PVS server, it contacts the first available one.
5. The PVS server identifies the client by its MAC address and checks which vDisk is assigned to it. The vDisk contains the image of the client’s operating system (for example, Windows 7).
6. The client system then mounts the vDisk over the network and starts using the operating system the same way it would if the OS were local.
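The six steps above can be sketched as a short simulation. Everything below is illustrative: the function names, addresses, and lookup tables are invented for this sketch and are not part of the PVS software or its API.

```python
# Illustrative walk-through of the PVS boot sequence; all names and
# addresses are invented for this sketch, not taken from PVS itself.

PVS_SERVERS = {"10.0.0.11": False, "10.0.0.12": True}     # address -> available?
VDISK_BY_MAC = {"00:15:5d:01:02:03": "win7-standard.vhd"}

def dhcp_lease():
    # Step 1: the client obtains an IP address and the TFTP server address.
    return {"client_ip": "10.0.0.101", "tftp_server": "10.0.0.10"}

def download_nbp(tftp_server):
    # Steps 2-3: the Network Bootstrap Program carries the PVS server list
    # and sets up the I/O redirector on the client.
    return {"pvs_servers": list(PVS_SERVERS)}

def first_available(servers):
    # Step 4: the client contacts the first PVS server that responds.
    return next(s for s in servers if PVS_SERVERS[s])

def boot_client(mac):
    lease = dhcp_lease()
    nbp = download_nbp(lease["tftp_server"])
    server = first_available(nbp["pvs_servers"])
    vdisk = VDISK_BY_MAC[mac]            # Step 5: vDisk looked up by MAC address
    return f"{mac} mounted {vdisk} from {server}"  # Step 6: mount over network

print(boot_client("00:15:5d:01:02:03"))
```

Note how the sketch falls through to the second PVS server because the first is marked unavailable, mirroring step 4.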
Best practice: Configuring Citrix PVS on a network
Disable Spanning Tree or enable PortFast. With Spanning Tree enabled, network ports are placed into a blocking state while the switch exchanges BPDUs and verifies that the port is not part of a loop. Depending on how long this process takes, PXE requests might time out.
Citrix recommends disabling STP on edge ports connected to clients, or enabling PortFast or FastLink, depending on the brand of the managed switch.
Disable Large Send Offload. The TCP Large Send Offload option allows messages of up to 64KB, which are then segmented into multiple TCP packets of 1500 bytes, or up to 9000 bytes when jumbo frames are enabled with an MTU of 9000. This re-segmenting of TCP packets can cause latency and timeouts between the PVS server and its clients. Therefore, Large Send Offload should be disabled on all PVS servers and clients.
Disable Auto Negotiation. Having Auto Negotiation enabled on the NIC can cause long boot times and PXE timeouts, in particular when booting multiple clients. Citrix recommends configuring the network speed on the NICs in the PVS servers, clients, and on the network switch manually.
Best practice: PVS memory and cache
Needless to say, it is imperative for PVS server performance that the server is configured with enough memory to cache its vDisk(s). Depending on the available memory, the PVS server will cache vDisk contents in system cache memory and service client vDisk requests from memory rather than from disk. Citrix provides a whitepaper that offers a formula to determine memory and storage requirements.
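The whitepaper has the authoritative formula; the sketch below only shows the shape of such a sizing calculation. The per-vDisk constants are illustrative assumptions, not Citrix's figures.

```python
def pvs_ram_estimate_gb(desktop_vdisks, server_vdisks=0, os_overhead_gb=2):
    """Rough PVS server RAM estimate.

    Placeholder rule of thumb: reserve RAM for the host OS plus enough to
    cache each actively streamed vDisk. The 2 GB / 4 GB per-vDisk constants
    are illustrative assumptions; substitute the figures from Citrix's
    sizing whitepaper for a real deployment.
    """
    return os_overhead_gb + desktop_vdisks * 2 + server_vdisks * 4

print(pvs_ram_estimate_gb(desktop_vdisks=4))
```

The point of the calculation is that RAM needs grow with the number of distinct vDisks being streamed, not with the number of clients, since clients sharing one vDisk are served from the same cached copy.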
Best practice: PVS write cache
Depending on the specific needs for your VDI setup, there are different Access Modes available:
· Standard Mode
· Private Mode
· Differential Mode
The choice of access mode makes no difference to the amount of disk IOPS; however, the selected mode determines where disk activity takes place and can affect vDesktop performance. In Private and Differential modes, disk writes occur on the vDisk; in Standard Mode, disk writes occur on the write-cache file.
Write-cache location. The location of the PVS write-cache file can impact server as well as client performance. There are different locations possible:
· PVS server local disk. Although easy to set up, this is slow because of the time it takes to traverse the network.
· Client’s memory. This option offers the best performance; however, if the write cache fills up, the client will crash. In addition, RAM is more expensive than disk space.
· Client’s hard drive. This option offers performance similar to a normal setup in a non-virtual environment. Response time is slower than with RAM, but at a lower cost.
If the write-cache is placed on local disks in the Hyper-V server, use either RAID 1 or RAID 10 to optimize performance. If the cache is located on a SAN, configure it with RAID 10.
Write-cache configuration. The typical recommendation for the write cache is about 2-3GB. Consider configuring the write cache to clear its contents during shutdown so it does not continue to grow in size.
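Since each client maintains its own write cache, the total local storage to reserve scales linearly with the number of clients. A minimal planning sketch, assuming the 2-3GB figure applies per client and adding a safety margin that is our own assumption, not a Citrix recommendation:

```python
def write_cache_storage_gb(clients, cache_gb=3.0, headroom=1.2):
    # Storage to reserve for write-cache files. The 20% headroom factor
    # is an illustrative assumption, not a Citrix figure.
    return clients * cache_gb * headroom

print(write_cache_storage_gb(100))   # 100 clients at 3 GB each, plus headroom
```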
Best practice: Hyper-V configuration
Use processors that support Second Level Address Translation (SLAT). SLAT offloads the translation of guest memory addresses to host physical addresses onto the processor, reducing hypervisor memory-management overhead. Because Hyper-V presents the logical processors as one or more virtual processors to each virtual machine, this efficiency gain is multiplied across every running VM.
Use processors with large caches. Hyper-V benefits from larger processor caches, especially when the ratio of virtual processors to logical processors is high.
Install multiple NICs. If you expect intensive network loads, consider installing multiple network adapters. Each adapter will be assigned to its own virtual switch.
Configure fixed vDisks. Citrix recommends configuring vDisks as regular Windows fixed disks rather than Windows dynamic disks. The reason is that dynamic vDisks include an extra byte at the end of the file, which puts the vDisk .vhd file out of alignment with the disk subsystem and degrades disk performance significantly.
The best practices we discussed in this article are not an exhaustive list. Setting up a VDI is complex, and many factors are involved in configuring it for optimal performance. However, the topics discussed here apply to most situations and will give you a start in the right direction.
In our next article we’ll discuss performance monitoring of a XenDesktop environment on Hyper-V.