A few weeks back I was called in to help a customer who was having trouble completing Jetstress testing for an Exchange 2010 deployment. It wasn’t an issue of Jetstress reporting failed tests. Rather, they were unable to get through most of their test runs without the Jetstress application itself crashing (“JetstressWin.exe has stopped working”). They would see the following after a Jetstress test completed but before it could write any log files to disk.
The only Jetstress-related error in the Application log was an ESE error with Event ID 482:
JetstressWin (3584) Instance3584.6: An attempt to write to the file “F:\DB\Jetstress006001.edb” at offset 63087017984 (0x0000000eb0478000) for 32768 (0x00008000) bytes failed over 0 seconds with system error 1117 (0x0000045d): “The request could not be performed because of an I/O device error.”. The write operation will fail with error -1022 (0xfffffc02). If this error persists then the file may be damaged and may need to be restored from a previous backup.
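If you want to check for these errors quickly on the Jetstress server itself, a small Python sketch along the lines below (using the built-in wevtutil tool) will pull recent ESE 482 events from the Application log. The provider name and query string are my own illustration, not something taken from the customer's environment.

import subprocess

def recent_ese_482_events(count=5):
    """Return the text of the most recent ESE Event ID 482 entries, newest first."""
    cmd = [
        "wevtutil", "qe", "Application",
        "/q:*[System[Provider[@Name='ESE'] and (EventID=482)]]",
        f"/c:{count}",    # number of events to return
        "/rd:true",       # reverse direction: newest first
        "/f:text",        # plain-text output instead of XML
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(recent_ese_482_events() or "No ESE 482 events found.")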
As Jetstress completes a test run, it generates a large amount of I/O while flushing everything in cache to disk, and it was at this point that the application was crashing. The flush itself is normal behavior, but it’s an important clue because of the high disk I/O it generates.
The customer was using vSphere 4.1, and the Exchange 2010 Mailbox servers were each configured with PVSCSI virtual SCSI controllers and VMDK files. As it turns out, they were hit by the PVSCSI bug described in this VMware KB:
Windows 2008 R2 virtual machine using a paravirtual SCSI adapter reports the error: Operating system error 1117 encountered http://kb.vmware.com/kb/2004578
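If you're wondering which VMs in your own environment are running on PVSCSI controllers and might be exposed to this bug, a rough sketch like the following, using pyVmomi (VMware's Python SDK), would find them. The vCenter hostname and credentials are placeholders, and this is my own illustration rather than anything from the customer's troubleshooting.

# Rough sketch: list VMs that have a PVSCSI controller attached.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def vms_with_pvscsi(host, user, pwd):
    ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        hits = []
        for vm in view.view:
            devices = vm.config.hardware.device if vm.config else []
            if any(isinstance(d, vim.vm.device.ParaVirtualSCSIController)
                   for d in devices):
                hits.append(vm.name)
        view.Destroy()
        return hits
    finally:
        Disconnect(si)

if __name__ == "__main__":
    for name in vms_with_pvscsi("vcenter.example.com", "administrator", "password"):
        print(name)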
The interesting thing to note is that although Exchange is specifically called out in the KB, it doesn’t mention that the bug can cause the application (in this case Jetstress) to crash. The crashing led the team to troubleshoot Jetstress first, thinking something was wrong with Jetstress itself or the various DLLs it requires to run.
At the end of the day the issue was resolved by following the instructions in the KB and changing the virtual SCSI adapter to LSI Logic SAS. After making that change there were no further issues with Jetstress.
In case you haven’t read the KB linked above, note that this issue affects vSphere 4.1 through 5.0 and has been fixed via patches for those versions. You’ll need to install the updates described in the KB if you want to use the PVSCSI driver on vSphere 4.1 through 5.0 (the fix is already included in vSphere 5.1).
Hopefully this helps anyone who might be experiencing this issue. I also hope it doesn’t dissuade anyone from using the PVSCSI driver for their business critical applications, as it can deliver better performance with lower CPU utilization when high I/O workloads are virtualized.
Are you looking to start testing Windows Server 2012 in your vSphere environment and want to use the PVSCSI driver for potentially better disk performance and lower CPU utilization? You may have noticed that the PVSCSI driver images available for Windows stop at Windows 2008. Will that driver work on your Windows 2012 VM?
In short, it sure will! I gave it a try and can confirm that the Windows 2008 PVSCSI driver loads on a Windows 2012 VM and lets it see and use its disks. I’m sure at some point VMware will release an updated version of the driver specific to Windows 2012, but for now this works just fine.
The following post covers how to load the PVSCSI driver on Windows 2012. If you’ve seen this procedure for Windows 2008 or previous versions then there won’t be any surprises here.
Just like with Windows 2008 (or any other Windows VM), you’ll need to load the correct virtual floppy image that contains the PVSCSI drivers. In our case, we load the “pvscsi-Windows2008.flp” image in our virtual floppy drive as seen below.
Don’t forget to either select “Connect at power on” on the virtual floppy device or remember to go back in and select “Connected” after the VM has been powered on.
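If you'd rather script this step than click through the VM settings, here's a rough pyVmomi sketch that points an existing virtual floppy drive at the PVSCSI image and sets it to connect at power on. The image path and the way you obtain the vm object are assumptions on my part, so adjust for your environment.

from pyVmomi import vim

# Typical location of the image on an ESXi host; verify the path in your setup.
PVSCSI_FLP = "[] /vmimages/floppies/pvscsi-Windows2008.flp"

def attach_pvscsi_floppy(vm):
    # Assumes the VM already has a virtual floppy drive.
    floppy = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualFloppy))
    floppy.backing = vim.vm.device.VirtualFloppy.ImageBackingInfo(fileName=PVSCSI_FLP)
    floppy.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True,   # "Connect at power on"
        connected=True)        # also connect now if the VM is already running
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=floppy)
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))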
Next, when you boot your VM from the Windows Server 2012 ISO, you’ll notice that the installer is unable to find any hard disks. Select the Load Driver option so you can load the PVSCSI driver from the floppy.
It will scan your virtual floppy disk and report that it found a compatible driver.
Once you select Next, you’ll see that the Windows 2012 installer can correctly read your virtual hard disk.
Finally, once you’re in Windows 2012, you can open Device Manager (that is, if you can figure out how the heck to launch it) and confirm that it reports the correct SCSI controller.
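If you'd rather verify it from a prompt instead of hunting for Device Manager, a quick in-guest check like this one works too. The "PVSCSI" match string is an assumption about how the controller names itself, so treat it as a sketch.

import subprocess

def scsi_controllers():
    """List SCSI controller names reported by WMI inside the guest."""
    out = subprocess.run(
        ["wmic", "path", "Win32_SCSIController", "get", "Name"],
        capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines()
            if line.strip() and line.strip() != "Name"]

if __name__ == "__main__":
    names = scsi_controllers()
    print("\n".join(names))
    print("PVSCSI present:", any("PVSCSI" in n for n in names))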
The release of Windows 2012 will bring with it a flood of other new applications, like Exchange 2013, Lync 2013, and the already released SQL 2012. Using the PVSCSI driver can help improve performance and lower CPU utilization, especially for workloads that are heavy consumers of disk I/O. Good to know we won’t have to wait for a newer PVSCSI driver in order to use it on Windows 2012!
Update 10/2/2012 - A comment on this post asked if this still worked on ESXi 5.1, as I actually did the test on ESXi 5.0. After finally upgrading my home lab I can confirm that this does indeed still work with ESXi 5.1, hardware version 9, and the RTM version of Windows Server 2012. I'm surprised to see that there isn't a specifically labeled PVSCSI driver floppy image for Windows Server 2012 in ESXi 5.1, but good to see the Windows 2008 FLP image still works just fine.
As I dig into documents and KB articles, I keep finding more and more things to like about vSphere 4.1. Today's find has to do with the PVSCSI driver.
With the release of vSphere 4.0, VMware added a new paravirtualized SCSI driver into the VMware Tools that provides better virtual disk performance than the standard LSI driver. The PVSCSI driver promised to deliver better performance and lower overall CPU utilization for workloads that had high I/O demands. Unfortunately the PVSCSI driver wasn't supported on virtual machine boot volumes, so folks held off on making this the default SCSI driver for all virtual machines.
After vSphere 4 Update 1 was released, VMware lifted the restriction and began supporting the PVSCSI driver on boot volumes. Folks then considered adopting the PVSCSI driver in all virtual machines, much as the VMXNET driver has become the standard for nearly all virtual NICs. Soon afterwards, VMware published a knowledgebase article stating that virtual machines without heavy I/O demands could actually see worse performance with the PVSCSI driver, and recommended using it only for workloads with I/O demands in excess of 2,000 IOPS.
With the release of vSphere 4.1 that is no longer a problem and you can use the PVSCSI driver in all circumstances. Want details? Read on!