
Wednesday, September 17, 2014

Advantages of using the VMware PVSCSI interface vs LSI SAS, and its caveats

Updated (again) 1330hrs:
Appended some other interesting information from the discussion resulting from that Facebook post.
Thanks guys!

LSI SAS by default supports a queue depth of only 25, versus PVSCSI's higher defaults. (Needs further confirmation - see the corrected figures in the appends below.)

Original Post:-

While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.

My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs - I'm not sure if there are any official articles/KBs on this) and not configure your OS disk/partition with PVSCSI, especially in a production environment where you may have a few other VMware administrators.

However, for a controlled test environment like home labs, by all means try it. All my home lab VMs are running PVSCSI on OS disks too. ;)

The details of why "don't do that" follow:

This is a reply to a question posted on Facebook's VMUG ASEAN group about how to configure the PVSCSI interface as a replacement for the default adapter.

(I don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link Facebook posts, let me know in the comments below :D )





Here's my 2 cents. I did some deep-dive research on PVSCSI and there are caveats. Some OSes may have issues with it, particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional, so there may be situations where updating or losing VMware Tools costs you connectivity to the disks attached via the PVSCSI device. I had considered using PVSCSI as the OS boot interface (after switching the vNIC using the article Lalit Sharma mentioned). However, if you get into a situation where you need to boot the OS to repair it (Windows in this case; I don't have enough Linux experience), you will have to reconfigure the interface back to LSI, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration. Anyhow, for my home lab, everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.

Appends:-

Roshan Jha: Posted a recent VMware blog article (which I had not seen earlier).
It's VSAN-related but relevant.

Which vSCSI controller should I choose for performance?  - Mark Achtemichuk

Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "

Tan Wee Kiong - thanks for correcting the initial assumption and for the following KB article:

"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"

"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."

Note that the article has made a distinction between a "device" and the "adapter".
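As a sketch of how those larger queue depths are actually applied per KB 2053145 (verify against the KB for your exact guest OS before using; the values shown are the maximums quoted above):

```shell
# Windows guest: add the pvscsi driver parameter in the registry
# (elevated prompt, reboot required afterwards):
REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

# Linux guest: pass module options on the kernel boot line (e.g. via GRUB),
# then reboot:
#   vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32
```

These are guest-side settings only; remember the backing array and HBA queue depths also have to support the extra outstanding I/O, or you just move the bottleneck.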

Tuesday, June 24, 2014

VMware vSphere Snapshots (draft-WIP)

This post aims to condense into a single page the important information regarding snapshots, svmotion (snapshots are used), cloning (snapshots are used there too!), and some general issues and questions I've encountered in my working environment (quiescing errors, during Avamar backups, during cloning of "hardened" Windows GOS).

I started out looking for supporting articles but ended up going in and out of KBs and losing track of what belonged where. Hence this post. It's mostly my notes on what I think will be useful and important while trawling through the maze of KB articles.

Start here (Understanding how Snapshots work on different versions of ESX/ESXi)
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1015180
  • Quiesce: If the  flag is 1 or true, and the virtual machine is powered on when the snapshot is taken, VMware Tools is used to quiesce the file system in the virtual machine. Quiescing a file system is a process of bringing the on-disk data of a physical or virtual computer into a state suitable for backups. This process might include such operations as flushing dirty buffers from the operating system's in-memory cache to disk, or other higher-level application-specific tasks.

    Note: Quiescing indicates pausing or altering the state of running processes on a computer, particularly those that might modify information stored on disk during a backup, to guarantee a consistent and usable backup. Quiescing is not necessary for memory snapshots; it is used primarily for backups.
  • If the virtual disk is larger than 2TB in size, the redo log file is of the -sesparse.vmdk format.
  • .vmsd
    The .vmsd file is a database of the virtual machine's snapshot information and the primary source of information for the snapshot manager. The file contains line entries which define the relationships between snapshots as well as the child disks for each snapshot.
  • Snapshot .vmsn: The .vmsn file includes the current configuration and optionally the active state of the virtual machine.
  • The above files are placed in the working directory by default in ESX/ESXi 3.x and 4.x.
  • In ESXi 5.x and later, the snapshot descriptor and delta VMDK files are stored in the same location as the virtual disks (which can be a different directory from the working directory).
  • When removing a snapshot, the snapshot entity in the snapshot manager is removed before the changes are made to the child disks. The snapshot manager does not contain any snapshot entries while the virtual machine continues to run from the child disk. 
  •  During a snapshot removal, if the child disks are large in size, the operation may take a long time. This can result in a timeout error message from either VirtualCenter or the VMware Infrastructure Client.
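For reference, a quiesced snapshot can also be taken from the ESXi shell with vim-cmd (a sketch; the VM ID 42 below is illustrative - look yours up first):

```shell
# List VMs and their IDs:
vim-cmd vmsvc/getallvms

# Create a snapshot. Arguments: vmid, name, description,
# includeMemory (0/1), quiesce (0/1):
vim-cmd vmsvc/snapshot.create 42 "pre-patch" "quiesced, no memory" 0 1
```

Note that quiesce and includeMemory are mutually relevant here: as the KB quote above says, quiescing is not necessary for memory snapshots and is used primarily for backups.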

The child disk

The child disk, which is created with a snapshot, is a sparse disk. Sparse disks employ the copy-on-write (COW) mechanism, in which the virtual disk contains no data in places, until copied there by a write. This optimization saves storage space. The grain is the unit of measure in which the sparse disk uses the copy-on-write mechanism. Each grain is a block of sectors containing virtual disk data. The default size is 128 sectors, or 64KB (128 sectors × 512 bytes).


The disk chain

Generally, when you create a snapshot for the first time, the first child disk is created from the parent disk. Successive snapshots generate new child disks from the last child disk on the chain. The relationship can change if you have multiple branches in the snapshot chain.
This diagram is an example of a snapshot chain. Each square represents a block of data or a grain as described above:


  • Reverting virtual machines to a snapshot causes all settings configured in the guest operating system since that snapshot to be reverted. The configuration which is reverted includes, but is not limited to, previous IP addresses, DNS names, UUIDs, guest OS patch versions, etc.

When performing Storage vMotion
http://blogs.vmware.com/vsphere/2011/09/storage-vmotion-storage-drs-virtual-machine-snapshots.html
"It should also be noted that if you do a Storage vMotion of a VM with snapshots and the VM has the workingDir parameter set, the workingDir setting will be removed from the .vmx & the .vmsn snapshot data file will be moved to the home folder of the VM on the destination datastore. You do get a warning in the migration wizard about this"

"Therefore, if you use the snapshot.redoNotWithParent = "TRUE" parameter, you should refrain from doing Storage vMotion operations."

This happens regardless, even if you set the parameter above - in other words, try as best as possible to avoid putting the snapshot files on a datastore away from the parent -flat disk files if all the datastores involved are backing an SDRS cluster...

Troubleshooting
http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1031200
Disable selective VSS writers for troubleshooting

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=5962168
Using custom "pre-freeze" and "post-thaw" scripts.
Covers SYNC and LGTO_SYNC drivers, not VSS.
This article details why the VM may become unresponsive and seem "hung" during a snapshot process.

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1007696
Details VSS troubleshooting. This article also includes the services that need to be running on the GOS, and issues with quiescing.
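When chasing quiescing errors, a quick first check inside the Windows guest is the state of the VSS writers themselves (run in an elevated prompt):

```shell
# List all VSS writers and their health; every writer should report
# "State: [1] Stable" and "Last error: No error".
vssadmin list writers
```

A writer stuck in a failed or waiting state here usually explains a quiesced-snapshot failure before you ever need to dig into VMware Tools.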

When performing cloning on vSphere v5.x on a VM with snapshots
This is what's been observed: the base disk plus snapshot(s) are copied over to the destination VM, merging the snapshot(s) into a single VMDK at the destination.

When you've run out of space on the datastore and snapshots cannot be deleted
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004545
This article details the steps to take with a command-line tool, provided you already have another datastore with sufficient space or have been able to increase the space on the datastore that ran out.
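The core of that KB's approach is cloning the snapshot chain into a single consolidated disk with vmkfstools (a sketch - the datastore and VM names are illustrative, and the VM must be powered off):

```shell
# Point at the *newest* delta disk; vmkfstools walks the chain down to
# the base disk, so the output contains all committed changes.
vmkfstools -i /vmfs/volumes/full-ds/vm1/vm1-000002.vmdk \
           /vmfs/volumes/spare-ds/vm1/vm1-consolidated.vmdk -d thin
```

You then repoint the VM at the consolidated disk; follow the KB for the exact reconfiguration steps rather than improvising them.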

There is a limit on how many open vmdk files an ESXi host can address depending on the VMFS version. 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004424
This article is very interesting technically. It covers all versions of ESXi to date; there are changes to the heap size between version updates. Useful. Here's the table of limits reproduced:
Version/build                            | Default heap amount | Default allowed open VMDK storage per host | Minimum heap amount | Maximum heap amount | Maximum heap value | Maximum open VMDK storage per host
ESXi/ESX 3.5/4.0                         | 16 MB               | 4 TB                                       | N/A                 | N/A                 | N/A                | N/A
ESXi/ESX 4.1                             | 80 MB               | 8 TB                                       | N/A                 | 128 MB              | 128                | 32 TB
ESXi 5.0 Update 2 (914586) and earlier   | 80 MB               | 8 TB                                       | N/A                 | 256 MB              | 255                | 25 TB
ESXi 5.0 Patch 5 (1024429) and later     | 256 MB              | 60 TB                                      | 256 MB              | 640 MB              | 255                | 60 TB
ESXi 5.1 Patch 1 (914609) and earlier    | 80 MB               | 8 TB                                       | N/A                 | 256 MB              | 255                | 25 TB
ESXi 5.1 Update 1 (1065491) and later    | 256 MB              | 60 TB                                      | 256 MB              | 640 MB              | 255                | 60 TB

Disks (VMDK) larger than 2TB (for ESXi 5.5 with VMFS5 only. If using NFS, the backend must be on a file system with large-file support, like EXT4. Extending disks beyond 2TB also requires the Web Client or vCLI)
http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2058287
Changes in virtual machine snapshots for VMDKs larger than 2 TB:
  • Snapshots taken on VMDKs larger than 2 TB are now in Space Efficient Virtual Disk (SESPARSE) format. No user interaction is required. The redo logs will be automatically created as SESPARSE instead of VMFSSPARSE (delta) when the base flat VMDK is larger than 2 TB.
  • Extending a base flat disk on VMFSSPARSE or SESPARSE is not supported.
  • The VMFSSPARSE format does not have the ability to support 2 TB or more.
  • VMFSSPARSE and SESPARSE formats cannot co-exist in the same VMDK. In a virtual machine, both types of snapshot can co-exist, but not in the same disk chain. For example, when a snapshot is taken for a virtual machine with two virtual disks attached, one smaller than 2 TB and one larger than 2 TB, the smaller disk snapshot will be VMFSSPARSE and the larger disk snapshot will be SESPARSE.
  • Linked clones will be SESPARSE if the parent disk is larger than 2 TB.
What else can cause snapshot consolidation to fail?
Main reference article (in Spanish):
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2046576
1. Locks (files are locked)
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=10051
2. Temporary loss of communication between vCenter and the ESXi hosts during confirmation - this does not mean the ESXi hosts show as disconnected from vCenter. To "restore" connectivity, restart the management agents on the host. (My note from field experience: there is a chance that during the restart of the management agents your host may really get disconnected from vCenter, AND if your cluster is EVC-enabled you will have to shut down all the running VMs on that host for it to rejoin the EVC cluster - so beware!)
3. A snapshot configuration file with extension .vmsd in the VM home directory may interfere. Rename, move or delete that file.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003490
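For causes 1 and 2 above, the corresponding ESXi-shell commands look roughly like this (a sketch; the datastore and file paths are illustrative):

```shell
# Cause 1 -- identify which host holds the file lock (KB 10051).
# Run on any host that can see the datastore:
vmkfstools -D /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk
# The lock owner's MAC address is then logged in /var/log/vmkernel.log;
# it identifies the management interface of the locking host.

# Cause 2 -- restart the management agents on the affected host
# (mind the EVC caveat noted above):
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```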



Tuesday, April 22, 2014

Heartbleed remediation for vCenter (build 1750787), ESXi (build 1746018), Web Client Integration plug-in (build 1750778), vSphere C# client (build 1746248)

Glad to report the vCenter update went without a hitch on my home lab. As always, YMMV.

Updating to vCenter 5.5.0u1a - install the components in sequence following the custom install order. No reboot required. All other components remain the same as in 5.5.0u1.
Versions of updated 5.5.0u1a vCenter SSO, Inventory Service, Web Client and vCenter Server.



VMware Update Manager will be restarted during installation.
Web Client Integration Plugin will still have the same name as 5.5.0u1 but the build/version has been updated
vSphere Client updated to build 1746248. Not sure if it's only my home NAS being slow, but before updating, the stats and info page for ESXi hosts would not display properly.
vSphere Client not displaying ESXi stats properly (before updating; could also be caused by my storage backend)

Thursday, July 26, 2012

HOWTO Fix vCenter 4 search not working

First, reset Web Service in vCenter.

Then, if it still doesn't work, on the vSphere client (not verified nor tested):

1. Click Plug-ins -> Manage Plug-ins
2. Right-click the Hardware Status plug-in and select Disable
3. Close and re-open the vSphere client.
4. Click Plug-ins -> Manage Plug-ins
5. Right-click the Hardware Status plug-in and select Enable

(Solution from one of my colleagues. I'm not sure if this step is correct... what does the "Hardware Status" plug-in have to do with search?)

If the client steps are wrong, corrections are welcome.

Friday, March 23, 2012

VMware/vSphere - CPU READY and CPU USAGE put simply


I was asked this question by my colleagues and after answering it with the official VMware explanation, they still didn't quite get it. (Actually, if I looked at it without the necessary background info, I probably wouldn't get it either...)

The following visualization helped put it simply:

What's the difference between CPU READY and CPU USAGE
CPU USAGE and CPU READY - what are they?




CPU Ready = the % of time there is work to be done for VMs, but no physical CPU is available to do it on (all host CPUs are busy serving other VMs). One rule of thumb I've heard: below 5% Ready is normal; between 5% and 10%, best keep an eye on the VM and the host; over 10% for extended periods, you should be planning to take some action.
CPU Usage = the raw, absolute amount of CPU used by the corresponding VM at a given moment.

References:
The amount of time a virtual machine waits in the queue in a ready-to-run state before it can be scheduled on a CPU is known as ready time.
The higher the ready time, the slower the virtual machine performs; ready time should preferably be as low as possible. Virtual machines that are allocated multiple vCPUs or have high timer-interrupt rates are more frequently seen with high ready time values.
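vCenter's real-time performance chart reports CPU Ready as a "summation" value in milliseconds, not a percentage, so the rule of thumb above needs a small conversion. The real-time chart samples every 20 seconds (20,000 ms); the 1000 ms value below is just an example:

```shell
# Convert a CPU Ready summation (ms) from the real-time chart into a
# percentage of the 20-second sample interval.
ready_ms=1000      # example value read from the performance chart
interval_ms=20000  # vCenter real-time sample interval (20 s)
ready_pct=$(awk -v r="$ready_ms" -v i="$interval_ms" \
  'BEGIN { printf "%.1f", r / i * 100 }')
echo "${ready_pct}%"   # 1000 ms over a 20 s interval = 5.0%
```

For multi-vCPU VMs the chart value aggregates all vCPUs, so it is common practice to also divide by the vCPU count before comparing against the thresholds.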


Monday, January 24, 2011

VMware: Not using Fault-tolerance? Turn off FT to enable Cluster Compliance

http://kb.vmware.com/kb/1017714


To disable Fault Tolerance compliance checks:
  1. Right-click the cluster and click Edit Settings > VMware HA > Advanced Options
  2. Enter das.includeFTcomplianceChecks in a blank field, and give it a value of false.

    When this setting is applied, Fault Tolerance compliance checks are removed from the description under the Profile Compliance tab for the cluster and no longer play a role during a Cluster Compliance check.
Note: To re-enable the checks, remove the das.includeFTcomplianceChecks option.


Wednesday, January 19, 2011

To remove an unused plug-in from vCenter, use the Managed Object Browser (MOB)

Found at http://vcenterservername/mob

Log on with vSphere credentials:


  1. Click on content, then
  2. ExtensionManager
  3. Find the plug-in which needs to be removed; for example, look for extensionList["VirtualCenter"] - the parameter you need is just VirtualCenter
  4. Click UnregisterExtension and, in the VALUE field, enter the name of the plug-in you wish to remove (in this example, VirtualCenter)
  5. Click on Invoke Method (to remove the plug-in)