
Tuesday, February 17, 2015

Patching CentOS 6.5 on VMware

Just a quick and dirty post for my future reference.

Sometimes the OS gets confused about its network interfaces, especially if there are leftover entries for VMXNET adapters. When you run "system-config-network", eth0 should show the VMware NIC type, for example "VMXNET3".

Otherwise:
1. Remove the unnecessary lines from /etc/udev/rules.d/70-persistent-net.rules
2. Make sure the MAC address matches the one assigned by ESXi
3. Restart the network service: "service network restart"
4. "yum clean all" (in case the cache is pointing to dead update locations)
5. yum update

Location of the network configuration file (assuming the first network adapter):
/etc/sysconfig/network-scripts/ifcfg-eth0
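The cleanup in step 1 can be sketched against a throwaway file. The MAC addresses and /tmp paths below are invented for illustration; on a real system you would edit /etc/udev/rules.d/70-persistent-net.rules itself:

```shell
# Hypothetical 70-persistent-net.rules with a stale eth0 entry left over
# from a previous vNIC (both MAC addresses are made up for this sketch).
cat > /tmp/70-persistent-net.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0c:29:dd:ee:ff", NAME="eth1"
EOF

# Keep only the line whose MAC matches what ESXi currently assigns,
# and rename the surviving entry back to eth0.
ESXI_MAC="00:0c:29:dd:ee:ff"
grep "$ESXI_MAC" /tmp/70-persistent-net.rules \
  | sed 's/NAME="eth1"/NAME="eth0"/' > /tmp/70-persistent-net.rules.fixed

cat /tmp/70-persistent-net.rules.fixed
```

Remember that step 2 also means the HWADDR line in ifcfg-eth0 (if present) must match the same ESXi-assigned MAC, or the interface will fail to come up on restart.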

Wednesday, September 17, 2014

Advantages of using VMware PVSCSI interface vs LSI SAS and its caveats

Updated (again) 1330hrs:
Appended some other interesting information from the discussion resulting from that Facebook post.
Thanks guys!

LSI SAS by default supports only a queue depth of 25 (needs further confirmation) vs PVSCSI.

Original Post:-

While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.

My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs - not sure if there are any official articles/KBs) and not configure your OS disk/partition with PVSCSI, especially in a production environment where you may have a few other VMware administrators.

However, for a controlled test environment like home labs, by all means try it. All my home lab VMs are running PVSCSI on OS disks too. ;)

The details of why "don't do that" follow:

This is a reply to a question posted on Facebook's VMUG ASEAN group about how to configure the PVSCSI interface as a replacement.

(Don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link Facebook posts, let me know in the comments below :D )





Here's my 2 cents. I did some deep-dive research on PVSCSI and there are caveats. Some OSes may have issues with it, particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional; there may be situations where, if you update or lose VMware Tools, you lose connectivity to the disks attached via the PVSCSI device.

I had considered using PVSCSI as the OS boot interface (after switching the vNIC using the article Lalit Sharma mentioned). However, if you get into a situation where you need to boot the OS to repair it (Windows in this case; with Linux I don't have enough experience), you will have to reconfigure the interface back to LSI, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration.

Anyhow, for my home lab, everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.

Appends:-

Roshan Jha: Posted a recent VMware blog article (which I did not see earlier). 
It's VSAN related but relevant.

Which vSCSI controller should I choose for performance?  - Mark Achtemichuk

Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "

Tan Wee Kiong - thanks for correcting the initial assumption and for the following KB article:

"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"

"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."

Note that the article has made a distinction between a "device" and the "adapter".
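For reference, KB 2053145 describes how to raise these defaults from inside the guest. As best I can recall the settings look like the fragment below, but verify the exact parameter names and values against the KB before using them:

```
# Linux: append vmw_pvscsi module parameters to the kernel boot line
# (e.g. in grub.conf) - values per KB 2053145, verify before use:
vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32

# Windows: registry value (REG_SZ) read by the pvscsi driver:
# HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device
#   DriverParameter = "RequestRingPages=32,MaxQueueDepth=254"
```

Either way a reboot of the guest is needed for the new queue depths to take effect.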

Friday, June 20, 2014

Things to look out for when using VMware PVSCSI

Well, since VMXNET3 is the optimal choice for networking, why not PVSCSI for storage?

When rolling out to a production environment, we have to make sure we know the possible caveats and limitations, so that stakeholders can be informed and operations have the correct information for deployments.

Following is a summary of things to look out for based on URL here:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398


  • The VMware PVSCSI adapter driver is also compatible with the Windows Storport storage driver
  • PVSCSI adapters are not suited for DAS environments.
  • Cannot be used as a boot disk for Red Hat Enterprise Linux (RHEL) 5 (32 and 64 bit) and all update releases
  • Hot-adding a PVSCSI adapter is only supported for those versions that support booting from a PVSCSI adapter.
  • Hot add or hot remove requires a bus rescan from within the guest.
  • Disks with snapshots might not experience performance gains when used on Paravirtual SCSI adapters if memory on the ESX host is overcommitted.
  • Do not use PVSCSI on a virtual machine running Windows with spanned volumes. Data may become inaccessible to the guest operating system.
  • If you upgrade from RHEL 5 to an unsupported kernel, you might not be able to access data on the virtual machine's PVSCSI disks. You can run vmware-config-tools.pl with the kernel-version parameter to regain access.
  • If a virtual machine uses PVSCSI, it cannot be part of a Microsoft Cluster Server (MSCS) cluster.
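On the hot-add point above: in a Linux guest, the bus rescan is typically done by writing to the scan files under /sys/class/scsi_host. The sketch below runs against a throwaway directory tree (an assumption made so it is safe to execute anywhere); on a real guest you would point it at /sys:

```shell
# Stand-in for /sys so this sketch can run anywhere (demo assumption).
SYSFS="/tmp/demo-sys"
mkdir -p "$SYSFS/class/scsi_host/host0"
: > "$SYSFS/class/scsi_host/host0/scan"

# Wildcard rescan: "- - -" means all channels, all targets, all LUNs.
for host in "$SYSFS"/class/scsi_host/host*; do
  echo "- - -" > "$host/scan"
done

cat "$SYSFS/class/scsi_host/host0/scan"
```

On a real guest this must run as root, and the newly hot-added PVSCSI disk should then appear as a fresh /dev/sdX device.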

I remember seeing some other considerations for View deployments somewhere, and will update this post once there is more information.

Have a great day ahead!

Monday, January 24, 2011

VMware: Not using Fault-tolerance? Turn off FT to enable Cluster Compliance

http://kb.vmware.com/kb/1017714


To disable Fault Tolerance compliance checks:
  1. Right-click the cluster and click Edit Settings > VMware HA > Advanced Options
  2. Enter das.includeFTcomplianceChecks in a blank field, and give it a value of false.

    When this setting is applied, Fault Tolerance compliance checks are removed from the description under the Profile Compliance tab for the cluster and no longer play a role during a Cluster Compliance check.
Note: To re-enable the checks, remove the das.includeFTcomplianceChecks option.