Wednesday, September 17, 2014

Advantages of using the VMware PVSCSI interface vs LSI SAS, and its caveats

Updated (again) 1330hrs:
Appended some more interesting information from the discussion that followed the Facebook post.
Thanks guys!

LSI SAS by default supports a queue depth of only 25 (needs further confirmation - see the correction in the appends below, which puts it at 32) vs the much deeper queues of PVSCSI.

Original Post:-

While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.

My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs - I'm not sure there is an official article/KB on this) and not configure your OS disk/partition with PVSCSI, especially in a production environment where a few other VMware administrators may have to manage the same VMs. A quick way to audit which controller your VMs are using is sketched right below.
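
Here's a minimal pyVmomi (Python) sketch that reports which SCSI controller type each VM is using, so PVSCSI-backed OS disks don't surprise anyone. The vCenter hostname and credentials are placeholders you'd swap for your own; treat this as a sketch, not a hardened script.

    # Minimal pyVmomi sketch: report each VM's SCSI controller types.
    # The host/user/pwd values below are placeholders for your own vCenter.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab convenience; use real certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        if vm.config is None:  # skip inaccessible/unconfigured VMs
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.ParaVirtualSCSIController):
                print(vm.name + ": PVSCSI controller on bus " + str(dev.busNumber))
            elif isinstance(dev, vim.vm.device.VirtualLsiLogicSASController):
                print(vm.name + ": LSI SAS controller on bus " + str(dev.busNumber))

    Disconnect(si)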

However, for a controlled test environment like a home lab, by all means try it. All my home lab VMs are running PVSCSI on their OS disks too. ;)

The details of why I say "don't do that" follow:

This started as my reply to a question on Facebook's VMUG ASEAN group about how to configure PVSCSI as a replacement interface.

(I don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link to Facebook posts, let me know in the comments below :D )

Here's my 2 cents. I did some deep-dive research on PVSCSI, and there are caveats. Some OSes may have issues with it, particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional, so there may be situations where updating or losing VMware Tools costs you connectivity to the disks connected through the PVSCSI device.

I had considered using PVSCSI as the OS boot interface (after switching the vNIC using the article Lalit Sharma mentioned). However, if you ever get into a situation where you need to boot the OS from installation media to repair it (Windows in this case; I don't have enough Linux experience to say), you will have to reconfigure the interface back to LSI first, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration.

Anyhow, for my home lab, everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.
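
To make that reconfiguration concrete, here is a hedged pyVmomi sketch that adds a second, PVSCSI controller to an existing VM for data disks (not the boot disk, per the caveats above). It assumes vm is a vim.VirtualMachine you've already looked up (for example via the container view in the earlier sketch) and that bus 1 is free.

    # Sketch: add a PVSCSI controller (bus 1) to an existing VM for data disks.
    # Assumes `vm` is a vim.VirtualMachine already retrieved, bus 1 is unused,
    # and VMware Tools is running in the guest so the pvscsi driver is present.
    from pyVmomi import vim

    ctrl = vim.vm.device.ParaVirtualSCSIController()
    ctrl.busNumber = 1   # bus 0 usually carries the boot controller; leave it alone
    ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
    ctrl.key = -101      # temporary negative key; vSphere assigns the real one

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = ctrl

    # Reconfigure the VM; in real code, wait on the task before attaching disks.
    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[dev_spec]))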

Appends:-

Roshan Jha posted a recent VMware blog article (which I had not seen earlier).
It's VSAN-related but still relevant here.

Which vSCSI controller should I choose for performance? - Mark Achtemichuk

Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "

Tan Wee Kiong - thanks for correcting my initial assumption (the LSI queue depth is 32, not 25) and for the following KB article:

"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"

"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."

Note that the article makes a distinction between a "device" and the "adapter": the device queue depth applies to each individual virtual disk, while the adapter queue is shared by all the disks on that controller.
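
For concreteness, here is how those values get applied inside a Linux guest, per my reading of KB 2053145: the vmw_pvscsi module takes cmd_per_lun (device queue) and ring_pages (adapter ring) options. The Python sketch below just writes the modprobe file; the exact values mirror the KB's example as I recall it, so verify against the KB before rolling this out. The Windows equivalent is a registry value, noted in the comments.

    # Sketch: persist the PVSCSI queue-depth options from KB 2053145 in a Linux guest.
    # Values mirror the KB's example as I recall it; verify against the KB first.
    conf_path = "/etc/modprobe.d/pvscsi.conf"
    with open(conf_path, "w") as f:
        f.write("options vmw_pvscsi cmd_per_lun=254 ring_pages=32\n")
    print("Wrote " + conf_path + "; rebuild the initramfs and reboot to apply.")
    # On Windows, the KB instead adds a REG_SZ value "DriverParameter" =
    # "RequestRingPages=32,MaxQueueDepth=254" under
    # HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device.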
