Basically, we still need a VMFS3 datastore as an "AUX" to shrink disks.
This is interesting! In plain English: because the VAAI datamover runs on the array rather than at the ESXi layer, the storage has no idea what is inside the VMDK and has to copy everything. The ESXi layer never gets a chance to figure out which blocks it could drop!
The conditions: ESXi 5.x onwards (VMFS5), VAAI-capable/enabled storage, and thin-provisioned VMs.
"...When the source filesystem uses a different blocksize from the destination filesystem, the legacy datamover (FSDM) is used. When the blocksizes of source and destination are equal, the new datamover (FS3DM) is used. FS3DM decides if it will use VAAI or just the software component. In either case, null blocks are not reclaimed"
Thanks to Boon Hong for highlighting this.
http://kb.vmware.com/kb/2004155
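For anyone who wants to poke at this themselves, here's a rough sketch of what I'd check from the ESXi shell. This is illustrative only (standard esxcli/vmkfstools commands on ESXi 5.x; the datastore and VM names are made up, not from the KB above):

# Is VAAI claiming the device? If so, the copy is offloaded and ESXi never sees the blocks
esxcli storage core device vaai status get

# Compare the file block sizes of source and destination datastores; per the KB, only when
# they differ does Storage vMotion use the legacy FSDM datamover, the one that drops null blocks
vmkfstools -Ph /vmfs/volumes/VMFS5-datastore
vmkfstools -Ph /vmfs/volumes/VMFS3-AUX

# In-place alternative on ESXi 5.x: punch out zeroed blocks from a thin VMDK (VM powered off)
vmkfstools -K /vmfs/volumes/VMFS5-datastore/myvm/myvm.vmdk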
Wednesday, September 17, 2014
Advantages of using the VMware PVSCSI interface vs LSI SAS, and its caveats
Updated (again) 1330hrs:
Appended some other interesting information from the discussion resulting from that Facebook post.
Thanks guys!
LSI SAS by default supports a queue depth of only 25 (needs further confirmation; see the correction from Tan Wee Kiong below) vs PVSCSI.
Original Post:-
While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.
My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs; I'm not sure there are any official articles/KBs on this) and not configure your OS disk/partition with PVSCSI, especially in a production environment where you may have a few other VMware administrators.
However, for a controlled test environment like home labs, by all means try it. All my home lab VMs are running PVSCSI on OS disks too. ;)
The details of why "don't do that" follow:
This is a reply to a post on Facebook's VMUG ASEAN group, answering a question on how to configure PVSCSI as a replacement interface.
(Don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link Facebook posts let me know in the comments below :D )
Here's my 2 cents. I did some deep-dive research on PVSCSI and there are caveats. Some OSes may have issues with it, particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional, so there may be situations where, after updating or losing VMware Tools, you lose connectivity to the disks attached via the PVSCSI adapter.

I had considered using PVSCSI as the OS boot interface (after switching the adapter using the article Lalit Sharma mentioned). However, if you get into a situation where you need to boot the OS from media to repair it (Windows in this case; with Linux I don't have enough experience), you will have to reconfigure the interface back to LSI, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration.

Anyhow, for my home lab everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.
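If you do go down that road, a quick sanity check (just a sketch; I'm assuming the usual driver/module names shipped with VMware Tools) is to confirm the PVSCSI driver is actually present inside the guest before touching the boot disk's controller:

# Windows guest: the PVSCSI driver registers as the "pvscsi" service
sc query pvscsi

# Linux guest: the driver is the vmw_pvscsi kernel module
lsmod | grep vmw_pvscsi
modinfo vmw_pvscsi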
Appends:-
Roshan Jha: Posted a recent VMware blog article (which I did not see earlier).
It's VSAN related but relevant.
Which vSCSI controller should I choose for performance? - Mark Achtemichuk
Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "
Tan Wee Kiong - thanks for the correction of the initial assumption and the following KB article:
"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"
"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."
Note that the article has made a distinction between a "device" and the "adapter".
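To make the device vs. adapter distinction concrete, here's roughly the tuning that KB article describes, written from memory, so verify against the KB itself before applying it in production. The registry value and module parameters below are what I recall the KB listing:

# Windows guest (elevated prompt, reboot afterwards): raise the PVSCSI ring pages and per-device queue depth
REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

# Linux guest: pass the equivalent vmw_pvscsi module parameters on the kernel command line
# (e.g. append to GRUB_CMDLINE_LINUX in /etc/default/grub, regenerate grub.cfg, reboot)
vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32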
Labels: adapter queue depth, caveats, device queue depth, disk, how-to, howto, interfaces, LSI SAS, optimizations, PVSCSI, queue depth, storage, tuning, VM, VMware, VMware Tools, VSAN, vSphere
