Wednesday, September 17, 2014
Advantages of using VMware PVSCSI interface vs LSI SAS and its caveats

Updated (again) 1330hrs:
Appended some other interesting information from the discussion resulting from that Facebook post.
Thanks guys!
LSI SAS by default supports a queue depth of only 25 (needs further confirmation - see the correction to 32 in the appends below) vs PVSCSI.
Original Post:-
While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.
My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs - not sure if there are any official articles/KBs) and not configure your OS disk/partition with PVSCSI, especially in a production environment where you may have a few other VMware administrators.
However, for a controlled test environment like home labs, by all means try it. All my home lab VMs are running PVSCSI on OS disks too. ;)
The details of why "don't do that" follow:
This is a reply to a post on Facebook's VMUG ASEAN group, to a question on how to replace the default SCSI controller with the PVSCSI interface.
(Don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link Facebook posts let me know in the comments below :D )
Here's my 2 cents. I did some deep-dive research on PVSCSI and there are caveats. Some guest OS configurations may have issues with it, particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional, and there may be situations where, when you update or lose VMware Tools, you lose connectivity to the disks connected through the PVSCSI device.

I had considered using PVSCSI as the OS boot interface (after switching the virtual SCSI controller using the article Lalit Sharma mentioned). However, if you get into a situation where you need to boot the OS to repair it (Windows in this case; with Linux I don't have enough experience), you will have to reconfigure the interface back to LSI, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration.

Anyhow, for my home lab, everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.
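For reference, here's roughly what such a controller swap looks like when scripted with pyVmomi (Python). This is a minimal sketch under stated assumptions, not the exact procedure from the article mentioned above: the vCenter address, credentials, VM name and disk label are all hypothetical, the VM should be powered off, and for a Windows boot disk the PVSCSI driver must already be present in the guest (e.g. by first attaching a temporary disk on a PVSCSI controller so Windows installs the driver).

```python
# Hedged sketch: move a disk to a new PVSCSI controller with pyVmomi.
# All names and credentials below are hypothetical; test on a lab VM first.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

# New PVSCSI controller on an unused bus; the temporary negative key lets
# the disk edit in the same spec reference the not-yet-created controller.
pvscsi = vim.vm.device.ParaVirtualSCSIController(
    key=-101, busNumber=1,
    sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing)
add_ctrl = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=pvscsi)

# Re-home the chosen disk onto the new controller.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 2")
disk.controllerKey = -101
disk.unitNumber = 0
edit_disk = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[add_ctrl, edit_disk]))
```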
Appends:-
Roshan Jha: Posted a recent VMware blog article (which I did not see earlier).
It's VSAN related but relevant.
Which vSCSI controller should I choose for performance? - Mark Achtemichuk
Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "
Tan Wee Kiong - thanks for the correction of the initial assumption and the following KB article:
"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"
"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."
Note that the article has made a distinction between a "device" and the "adapter".
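As a worked example of the guest-side change, here is a hedged Python sketch of the Windows registry edit that the KB describes, using the standard-library winreg module instead of regedit. Run it inside the guest as Administrator and reboot afterwards. The registry path and value string are from my reading of KB 2053145; verify them against the KB before using. (On Linux, the KB instead uses the vmw_pvscsi kernel module parameters cmd_per_lun and ring_pages.)

```python
# Hedged sketch of the Windows-guest registry change from KB 2053145.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # More request-ring pages raises the adapter queue depth (towards 1024);
    # MaxQueueDepth raises the per-device queue depth. Values as per the KB.
    winreg.SetValueEx(key, "DriverParameter", 0, winreg.REG_SZ,
                      "RequestRingPages=32,MaxQueueDepth=254")
print("PVSCSI DriverParameter set; reboot the guest for it to take effect.")
```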
Labels: adapter queue depth, caveats, device queue depth, disk, how-to, howto, interfaces, LSI SAS, optimizations, PVSCSI, queue depth, storage, tuning, VM, VMware, VMware Tools, VSAN, vSphere
Tuesday, June 24, 2014
VMware vSphere Snapshots (draft-WIP)
This post aims to condense into a single page important information regarding snapshots, svmotion (snapshots are used), cloning (snapshots are used there too!) and some general issues and questions I've encountered in my working environment (quiescing errors during Avamar backup, and during cloning of "hardened" Windows GOS).
I started out looking for supporting articles but ended up going in and out of KBs, losing track of what belongs to what and where belongs to where. Hence this post. It's mostly my notes of what I think will be useful and important while trudging through the maze of KB articles.
Start here (Understanding how Snapshots work on different versions of ESX/ESXi)
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1015180
- Quiesce: If the quiesce flag is 1 or true, and the virtual machine is powered on when the snapshot is taken, VMware Tools is used to quiesce the file system in the virtual machine. Quiescing a file system is a process of bringing the on-disk data of a physical or virtual computer into a state suitable for backups. This process might include such operations as flushing dirty buffers from the operating system's in-memory cache to disk, or other higher-level application-specific tasks.
Note: Quiescing indicates pausing or altering the state of running processes on a computer, particularly those that might modify information stored on disk during a backup, to guarantee a consistent and usable backup. Quiescing is not necessary for memory snapshots; it is used primarily for backups.
- If the virtual disk is larger than 2TB in size, the redo log file is of the -sesparse.vmdk format.
- .vmsd: The .vmsd file is a database of the virtual machine's snapshot information and the primary source of information for the snapshot manager. The file contains line entries which define the relationships between snapshots as well as the child disks for each snapshot.
- .vmsn: The snapshot .vmsn file includes the current configuration and optionally the active state of the virtual machine.
- The above files will be placed in the working directory by default in ESX/ESXi 3.x and 4.x.
- In ESXi 5.x and later, snapshot descriptor and delta VMDK files will be stored in the same location as the virtual disks (which can be in a different directory from the working directory).
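Since quiescing happens only when the flag is set and VMware Tools is healthy, here is a hedged pyVmomi sketch of taking a quiesced snapshot. The vCenter address, credentials and VM name are hypothetical.

```python
# Hedged sketch: take a quiesced, no-memory snapshot via pyVmomi.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

task = vm.CreateSnapshot_Task(
    name="pre-patch",
    description="quiesced snapshot before patching",
    memory=False,   # memory snapshots don't need quiescing (see note above)
    quiesce=True)   # asks VMware Tools to quiesce the guest file system
```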
- When removing a snapshot, the snapshot entity in the snapshot manager is removed before the changes are made to the child disks. The snapshot manager does not contain any snapshot entries while the virtual machine continues to run from the child disk.
- During a snapshot removal, if the child disks are large in size, the operation may take a long time. This can result in a timeout error message from either VirtualCenter or the VMware Infrastructure Client.
The child disk
The child disk, which is created with a snapshot, is a sparse disk. Sparse disks employ the copy-on-write (COW) mechanism, in which the virtual disk contains no data in places, until copied there by a write. This optimization saves storage space. The grain is the unit of measure in which the sparse disk uses the copy-on-write mechanism. Each grain is a block of sectors containing virtual disk data. The default size is 128 sectors, or 64 KB.
The disk chain
Generally, when you create a snapshot for the first time, the first child disk is created from the parent disk. Successive snapshots generate new child disks from the last child disk on the chain. The relationship can change if you have multiple branches in the snapshot chain.
(The KB article includes a diagram of an example snapshot chain, in which each square represents a block of data, or grain, as described above; the diagram is not reproduced here.)
- Reverting virtual machines to a snapshot causes all settings configured in the guest operating system since that snapshot to be reverted. The configuration which is reverted includes, but is not limited to, previous IP addresses, DNS names, UUIDs, guest OS patch versions, etc.
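To inspect the resulting chain, including branches, on a live VM, here is a hedged pyVmomi sketch that walks the snapshot tree; connection details and the VM name are hypothetical, as in the sketch above.

```python
# Hedged sketch: print a VM's snapshot tree (parents and their children).
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

def walk(snapshots, depth=0):
    # Each node is a VirtualMachineSnapshotTree; children branch the chain.
    for s in snapshots:
        print("  " * depth + f"{s.name}  (created {s.createTime})")
        walk(s.childSnapshotList, depth + 1)

if vm.snapshot:                        # None when the VM has no snapshots
    walk(vm.snapshot.rootSnapshotList)
```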
When performing Storage vMotion
http://blogs.vmware.com/vsphere/2011/09/storage-vmotion-storage-drs-virtual-machine-snapshots.html
"It should also be noted that if you do a Storage vMotion of a VM with snapshots and the VM has the workingDir parameter set, theworkingDir setting will be removed from the .vmx & the .vmsn snapshot data file will be moved to the home folder of the VM on the destination datastore. You do get a warning in the migration wizard about this"
"Therefore, if you use the snapshot.redoNotWithParent = "TRUE" parameter, you should refrain from doing Storage vMotion operations."
This happens regardless, even if you set the parameters above - in other words, if the datastores involved are backing an SDRS cluster, try as best as possible to avoid putting the snapshot files on a datastore away from the parent -flat disk files...
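A hedged sketch of what that means in practice: check for a workingDir override before relocating, then do the Storage vMotion. The target datastore name and connection details are hypothetical.

```python
# Hedged sketch: warn about workingDir, then Storage vMotion the VM.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

# The workingDir override, if present, will be dropped during the migration.
overrides = {o.key: o.value for o in vm.config.extraConfig}
if "workingDir" in overrides:
    print("workingDir =", overrides["workingDir"],
          "-> Storage vMotion will remove this setting")

# The Storage vMotion itself: relocate to a (hypothetical) target datastore.
ds = next(d for d in vm.runtime.host.datastore if d.name == "ds-target")
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=ds))
```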
Troubleshooting
http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1031200
Disable selective VSS writers for troubleshooting
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=5962168
Using custom "pre-freeze" and "post-thaw" scripts.
Covers SYNC and LGTO_SYNC drivers, not VSS.
This article details why the VM may become unresponsive and seem "hung" during a snapshot process.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1007696
Details VSS troubleshooting. This article also includes the services that need to be running on the GOS, and issues with quiescing.
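As a tiny illustration of the custom-script mechanism, here is a hedged Python sketch of a Linux-guest pre-freeze script. /usr/sbin/pre-freeze-script is the conventional location on Linux guests, but script names and locations vary with the Tools version and guest OS (see the KBs above), and the real quiesce action for your application is up to you.

```python
#!/usr/bin/env python3
# Hedged sketch of /usr/sbin/pre-freeze-script: VMware Tools runs it just
# before quiescing the guest for a snapshot; a matching post-thaw-script
# should undo whatever this does.
import subprocess
import syslog

syslog.syslog("pre-freeze: preparing guest for quiesced snapshot")
# A real script would quiesce the application here (e.g. lock DB tables).
subprocess.run(["sync"], check=True)  # flush dirty buffers to disk
syslog.syslog("pre-freeze: done")
```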
When performing cloning on vSphere v5.x on a VM with snapshots
This is what's been observed: the base disk plus snapshot(s) are copied over to the destination VM, merging the snapshot(s) into a single VMDK at the destination.
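A hedged pyVmomi sketch of such a clone; per the observation above, the destination VM ends up with the snapshot chain merged into a single VMDK. Names and connection details are hypothetical.

```python
# Hedged sketch: clone a VM that has snapshots.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

relocate = vim.vm.RelocateSpec(datastore=vm.datastore[0])  # same datastore
spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
task = vm.CloneVM_Task(folder=vm.parent, name="test-vm-clone", spec=spec)
```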
When you've run out of space on the datastore and snapshots cannot be deleted
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004545
This article details the steps to take with a command-line tool, provided you already have another datastore with sufficient space or have been able to increase the space on the datastore that ran out.
There is a limit on how many open vmdk files an ESXi host can address depending on the VMFS version.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004424
This article is very interesting technically. It covers all versions of ESX/ESXi to date, and the VMFS heap size changes between version updates. Useful. Here's the table of limits, reproduced:
| Version/build | Default heap amount | Default allowed open VMDK storage per host | Minimum heap amount | Maximum heap amount | Maximum heap value (MB) | Maximum open VMDK storage per host |
| --- | --- | --- | --- | --- | --- | --- |
| ESXi/ESX 3.5/4.0 | 16 MB | 4 TB | N/A | N/A | N/A | N/A |
| ESXi/ESX 4.1 | 80 MB | 8 TB | N/A | 128 MB | 128 | 32 TB |
| ESXi 5.0 Update 2 (914586) and earlier | 80 MB | 8 TB | N/A | 256 MB | 255 | 25 TB |
| ESXi 5.0 Patch 5 (1024429) and later | 256 MB | 60 TB | 256 MB | 640 MB | 255 | 60 TB |
| ESXi 5.1 Patch 1 (914609) and earlier | 80 MB | 8 TB | N/A | 256 MB | 255 | 25 TB |
| ESXi 5.1 Update 1 (1065491) and later | 256 MB | 60 TB | 256 MB | 640 MB | 255 | 60 TB |
Disks (VMDK) larger than 2TB (for ESXi 5.5 with VMFS5 only; if using NFS, the backend must be on a file system that has large-file support, like EXT4. Extending disks beyond 2TB also requires the use of the Web Client or vCLI)
http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2058287
Changes in virtual machine snapshots for VMDKs larger than 2 TB:
- Snapshots taken on VMDKs larger than 2 TB are now in Space Efficient Virtual Disk (SESPARSE) format. No user interaction is required. The redo logs will be automatically created as SESPARSE instead of VMFSSPARSE (delta) when the base flat VMDK is larger than 2 TB.
- Extending a base flat disk that has VMFSSPARSE or SESPARSE snapshots is not supported.
- The VMFSSPARSE format cannot support disks of 2 TB or larger.
- VMFSSPARSE and SESPARSE formats cannot co-exist in the same VMDK: in a virtual machine, both types of snapshot can co-exist, but not in the same disk chain. For example, when a snapshot is taken of a virtual machine with two virtual disks attached, one smaller than 2 TB and one larger than 2 TB, the smaller disk's snapshot will be VMFSSPARSE and the larger disk's snapshot will be SESPARSE.
- Linked clones will be SESPARSE if the parent disk is larger than 2 TB.
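A hedged pyVmomi sketch that flags which of a VM's disks will get SESPARSE rather than VMFSSPARSE snapshots, based on the 2 TB threshold above. Connection details and the VM name are hypothetical.

```python
# Hedged sketch: list disks whose snapshots will use the SESPARSE format.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

TWO_TB = 2 * 1024**4  # bytes
for dev in vm.config.hardware.device:
    if (isinstance(dev, vim.vm.device.VirtualDisk)
            and dev.capacityInBytes > TWO_TB):
        print(dev.deviceInfo.label,
              "> 2 TB: snapshots of this disk will be SESPARSE")
```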
What else can cause snapshots consolidation to fail?
Main reference article (in Spanish):
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2046576
1. Locks (files are locked)
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=10051
2. Temporary loss of communication between vCenter and the ESXi host during consolidation - this does not mean that the ESXi host is shown as disconnected from vCenter. To "restore" connectivity, restart the management agents on the host. (My note from field experience: there is a chance that during the restart of the management agents your host may really get disconnected from vCenter, AND if your cluster is EVC-enabled you will have to shut down all the running VMs on that host in order for it to rejoin the EVC cluster - so beware!)
3. A snapshot configuration file with extension .vmsd in the VM home directory may interfere. Rename, move or delete that file.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003490
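Once one of the causes above has been addressed, the stranded delta disks can be consolidated. A hedged pyVmomi sketch (connection details and VM name hypothetical):

```python
# Hedged sketch: retry disk consolidation on a VM that needs it.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")
view.DestroyView()

# vSphere sets this flag when the snapshot manager is empty but delta
# disks remain on the datastore.
if vm.runtime.consolidationNeeded:
    task = vm.ConsolidateVMDisks_Task()
    print("Consolidation task started:", task.info.key)
```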