Microsoft responds to VMware performance benchmark report
- Matt Liebowitz
VMware recently commissioned a study comparing performance between vSphere 5 and Hyper-V 2.0 SP1. The study ran a database simulation workload across virtual machines in two scenarios: 24 VMs (without memory overcommit) and 30 VMs (with memory overcommit). The results show that VMware vSphere outperforms Hyper-V in both scenarios.
The original performance report can be found here (opens a PDF): http://www.principledtechnologies.com/clients/reports/VMware/vsphere5density0811.pdf
Microsoft posted a response to the report, but strangely they did not post it on a TechNet blog or another Microsoft-branded site. Instead they uploaded it to Papershare, an online collaboration website for sharing technical papers. (Side note: Papershare is awesome and I highly recommend you sign up.) With all due respect to Papershare, I think it is obvious that Microsoft would have reached more readers had they simply posted the response on their own blog. Maybe they did, but I can't find it.
Here is Microsoft’s response, called VMware vSphere 5.0 Performance Benchmark Reality (Papershare login required): http://www.papershare.com/app/paper.aspx?id=1253&o=6
I wanted to share some thoughts on Microsoft’s findings and where I agree/disagree. Though I admit this is a VMware focused blog I am by no means a Microsoft basher or someone who simply says “Hyper-V sucks.” I think that Hyper-V has come a long way and there are definitely use cases for it.
One of the issues that Microsoft has with the memory overcommit test is that a 60 minute idle period was introduced before any testing was run. That is, the hypervisors were booted up and all virtual machines were started and then 60 minutes passed before any benchmark was run. Microsoft states that this gives VMware’s Transparent Page Sharing feature a chance to scan for and de-duplicate memory pages in RAM and so it wasn’t a fair or realistic test. They also state that the test is unrealistic because it is 30 identical workloads (ideal for memory sharing) and that most organizations do not run memory overcommit in production.
Finally, Microsoft states that VMware's EULA "restricts" Microsoft from running similar performance benchmarks to validate these results. Microsoft raises a few other points that I won't address here, so I'd recommend reading the paper.
I actually agree with Microsoft on one point: I don't often see memory overcommit in production. I most often see it in VDI environments, where VM density is more important than the performance gain from using large memory pages.
That said, I disagree with Microsoft's assertion that the 60 minute idle period gave the vSphere VMs an unfair advantage and time to share memory. First off, does Microsoft believe that organizations only run their production workloads in 59 minute increments? Even if they disagree with the 60 minute idle period, they have to understand that in production these VMs would be running 24/7, so a 60 minute head start really doesn't give vSphere any advantage.
Second, if the VMs are sitting idle and the application isn't running, then the application's memory pages are not in physical RAM and there is nothing to share. Only after the performance test started and the application loaded its pages into RAM did page sharing really have a chance to kick in and start sharing application-specific memory pages. At that point the 60 minute idle period made very little difference.
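To make the page sharing argument concrete, here is a minimal sketch of content-based page de-duplication, the basic idea behind Transparent Page Sharing. This is a deliberate simplification with made-up page contents: real TPS hashes candidate pages, verifies matches bit-for-bit, and remaps guest pages copy-on-write, none of which is modeled here.

```python
# Simplified illustration of content-based page sharing (the idea
# behind TPS). Identical pages across VMs can be backed by a single
# physical copy; only duplicates beyond the first copy are saved.
from collections import defaultdict

PAGE_SIZE = 4096  # bytes, a standard small page


def shareable_savings(pages):
    """Group pages by content and count the memory reclaimable by
    keeping one physical copy per unique page."""
    groups = defaultdict(int)
    for page in pages:
        groups[page] += 1
    duplicates = sum(count - 1 for count in groups.values())
    return duplicates * PAGE_SIZE


# Three idle VMs booted from the same template share OS pages, but the
# application-specific pages only exist once the workload is running.
vm_pages = [b"guest-os-code"] * 3 + [b"app-data-1", b"app-data-2"]
print(shareable_savings(vm_pages))  # 2 duplicate pages -> 8192 bytes saved
```

The point the sketch makes is the same one above: until the benchmark loads application pages into RAM, there is little for the scanner to find, so the idle hour mostly de-duplicates idle OS pages.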
In my experience I don't often see organizations overcommit memory for business critical/tier 1 applications, though there are certainly use cases. If the workloads are identical and VM density is important, then it actually does make sense.
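For readers unfamiliar with the term, the overcommit ratio is just configured VM memory divided by physical RAM. The host and per-VM sizes below are hypothetical (the study's exact sizing isn't reproduced here), but they show how the 24-VM and 30-VM scenarios differ:

```python
# Back-of-the-envelope overcommit math with assumed, illustrative
# numbers -- not the actual hardware from the benchmark report.
HOST_RAM_GB = 96  # assumed physical RAM in the host
VM_RAM_GB = 4     # assumed configured memory per VM


def overcommit_ratio(num_vms, host_ram_gb=HOST_RAM_GB, vm_ram_gb=VM_RAM_GB):
    """Configured memory across all VMs relative to physical RAM;
    a value above 1.0 means the host is overcommitted."""
    return (num_vms * vm_ram_gb) / host_ram_gb


print(overcommit_ratio(24))  # 1.0  -> fully committed, no overcommit
print(overcommit_ratio(30))  # 1.25 -> 25% overcommitted; page sharing
                             #         and ballooning cover the gap
```

With identical workloads, page sharing reclaims enough of that 25% gap to make the extra density practical, which is exactly why the scenario favors it.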
With respect to the EULA rule regarding performance benchmarks (which has been in place since at least ESX 2.x), my advice to Microsoft is this: don't hide behind the EULA. Run the test and submit the results to VMware for approval. I have actually done this in the past and was pleasantly surprised at how easy the process was. If your testing methodology is fair, then I believe VMware will allow publication even if it doesn't show vSphere coming out ahead.
The results show that in this particular workload, without memory overcommit, Hyper-V isn't too far behind vSphere 5. Congrats to Microsoft for improving Hyper-V and bringing it to a much higher level. Of course, straight performance is only one aspect of the differences between vSphere and Hyper-V, and this article doesn't touch on the many feature differences.
I’m all for fair benchmarking to compare products and reasonable rebuttals if necessary. However, the fact that Microsoft didn’t publish this rebuttal on their blog is strange, and I don’t completely agree with the logic they used to counter the results. I strongly believe that if Microsoft disagrees with the results then they should run their own benchmark and submit it to VMware for approval to publish.