Virtualizing Microsoft Lync 2013

It seems there is a bit of an uproar about a document Microsoft recently released called Planning a Lync Server 2013 Deployment on Virtual Servers. In that document, Microsoft makes some odd recommendations for virtualizing Lync, such as disabling NUMA on physical servers and not using hyperthreading.

To address these concerns, VMware has published a really nice blog post that responds to the unusual guidance in the document. You can read the blog post by clicking here. The author of the post was my co-presenter at VMworld last year and was a technical advisor on my book on virtualizing Microsoft apps, so he has very good credibility in this space. I highly recommend reading it.

I agree with everything Deji says in his blog post, but I wanted to add a few thoughts of my own.

Hyperthreading

I agree with Deji that we should always size virtual machines based on a host’s physical cores, not logical cores (despite how hyperthreaded cores are represented within vSphere). That is true for business-critical apps like Lync as well as for basic workloads. A logical core is not the same as a physical core, and we shouldn’t treat it that way. And don’t forget that even if we don’t assign the logical cores to virtual machines, ESXi can still use them when managing its own processes. That can help overall system performance for Lync and every other workload on the server.
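To make the sizing math concrete, here is a minimal Python sketch. The host values are hypothetical, not from Deji's post or the Microsoft document; the point is simply that the logical CPU count vSphere displays is double the real compute capacity when hyperthreading is on.

```python
# A minimal sizing sketch (hypothetical host values): size VM vCPUs
# against physical cores, not the logical cores vSphere reports.

SOCKETS = 2            # physical CPU sockets in the host
CORES_PER_SOCKET = 8   # physical cores per socket
HYPERTHREADING = True  # HT doubles logical cores, not capacity

physical_cores = SOCKETS * CORES_PER_SOCKET
logical_cores = physical_cores * (2 if HYPERTHREADING else 1)

# vSphere shows 32 logical CPUs on this host, but the real compute
# capacity is 16 physical cores, so a single Lync VM should never
# be sized past the physical count.
print(f"logical CPUs shown by vSphere: {logical_cores}")     # 32
print(f"max vCPUs to assign a single VM: {physical_cores}")  # 16
```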

NUMA

I can’t think of a single reason to disable NUMA.  Even if the workload doesn’t support it, ESXi does, and will place VMs within NUMA nodes to increase performance.  I almost think the author of the document was confused and meant to say to disable node interleaving, which is the name many BIOS settings use for the option that controls NUMA.  Disabling node interleaving = enabling NUMA.  A good example of this is Microsoft Exchange, which is not NUMA aware (unlike SQL Server, which is).  You wouldn’t disable NUMA on the ESXi host or on a physical server running Exchange just because the application doesn’t support it, since Windows itself and ESXi can take advantage of it.
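To illustrate why leaving NUMA enabled matters, here is a small Python sketch (all host values hypothetical) of the check the ESXi NUMA scheduler effectively performs: whether a VM fits inside a single NUMA node so its memory accesses stay local.

```python
# Illustrative sketch (hypothetical values): does a VM's vCPU and
# memory footprint fit inside one NUMA node? ESXi's NUMA scheduler
# tries to place a VM within a node so its memory stays local; a
# "wide" VM spans nodes and pays remote-memory latency.

NODES = 2             # NUMA nodes on the host
CORES_PER_NODE = 8    # physical cores per node
MEM_PER_NODE_GB = 64  # memory local to each node

def fits_in_one_node(vcpus: int, mem_gb: int) -> bool:
    return vcpus <= CORES_PER_NODE and mem_gb <= MEM_PER_NODE_GB

# An 8-vCPU / 32 GB Lync VM fits in a node: all memory access is local.
print(fits_in_one_node(8, 32))   # True
# A 12-vCPU / 96 GB VM spans nodes: some memory access is remote.
print(fits_in_one_node(12, 96))  # False
```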

Resource Over-commitment

The author makes a good point about over-committing resources (especially CPU) on Lync servers and the impact on performance.  On that I completely agree, though CPU reservations can alleviate that issue.  This is true with Lync just as it is true with Exchange Server and its requirement not to exceed a 2:1 vCPU:pCPU ratio.  It’s possible the author is loosely referring to processor affinity, sometimes called "CPU pinning."  I agree that you shouldn’t mess with processor affinity for Lync, but CPU reservations can be used to guarantee access to CPU resources.
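As a rough illustration of both points, here is a short Python sketch (host core count and clock speed are hypothetical) that checks the 2:1 vCPU:pCPU over-commit ceiling across a host and computes a full CPU reservation for a Lync VM in MHz, the unit vSphere uses for reservations.

```python
# Hypothetical example: enforce a 2:1 vCPU:pCPU over-commit ceiling
# and compute a full CPU reservation (in MHz) for a Lync VM.

PHYSICAL_CORES = 16    # physical cores on the host
CORE_SPEED_MHZ = 2600  # rated speed of each core

def within_overcommit_limit(total_vcpus: int, ratio: float = 2.0) -> bool:
    """True if the vCPUs of all VMs on the host stay within the ratio."""
    return total_vcpus <= PHYSICAL_CORES * ratio

def full_reservation_mhz(vcpus: int) -> int:
    """Reservation that guarantees a VM its full CPU entitlement."""
    return vcpus * CORE_SPEED_MHZ

# 28 vCPUs across all VMs on a 16-core host is 1.75:1 -- acceptable.
print(within_overcommit_limit(28))  # True
# Reserve 4 x 2600 = 10400 MHz for a 4-vCPU Lync server.
print(full_reservation_mhz(4))      # 10400
```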

Hopefully between my thoughts and the blog post from VMware you’ll see that there is nothing to worry about when virtualizing Lync.  It’s fully supported and you don’t need to change your standard practices in order to make it work.
