It's been a REALLY long time since I've blogged anything.
Anyhow, I've been with my current company for almost five years and have been using the same notebook running on Windows 7 for all these years.
Yes, they did give me a new notebook after three years, but I've stuck with the current one for reasons that shall not be covered in this article. :P
Basically, I ran out of space on my C: drive. I had only 10.5GB left out of 108GB.
Here's a quick and dirty note on how to clear space beyond what "Disk Cleanup" recovers.
DISCLAIMER:
Make sure you have a system image backup in case you FUBAR your machine.
This "note" here is meant for my own future reference.
If you destroy, break, or damage anything, or get yourself into hot soup at the office because of this, it was your decision and your choice. I cannot be held responsible for your choice of actions.
Now that that's been said...
1. Run "PatchCleaner" and MOVE the orphaned/old patch folders to another location. The program can be downloaded from "http://download.cnet.com/PatchCleaner/3000-18512_4-76399133.html" - this alone effectively cleared about 11GB of space for me.
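Before (and after) moving those folders, it helps to know how much space a directory tree is actually holding. Here is a minimal Python sketch for that; the C:\Windows\Installer path in the comment is just the usual location PatchCleaner scans, so adjust it to whatever folder you're sizing up:

```python
import os

def folder_size_bytes(root):
    """Sum the sizes of every regular file under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip dangling links and the like
                total += os.path.getsize(path)
    return total

# Example usage (size in GB):
#   folder_size_bytes(r"C:\Windows\Installer") / 1024**3
```

Run it once before the cleanup and once after, and the difference is what you actually reclaimed.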
2. Move your Windows Search Index off to another drive. How to do that?
- Open an administrative command prompt
- Run this command line "rundll32.exe shell32.dll,Control_RunDLL srchadmin.dll"
- Click on "Advanced"; on the next screen, click "Select New" and choose the folder you want to move the index files to.
- Click "OK" and wait.
Once control returns to the application, you will have more free space on your C: drive - I got back another 11.5GB.
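To confirm the space really came back, Python's shutil.disk_usage gives a quick before/after reading of free space on a volume. A small sketch (the drive path in the comment is illustrative):

```python
import shutil

def free_gb(path):
    """Free space, in GB, on the volume containing path."""
    return shutil.disk_usage(path).free / 1024**3

# Example usage - run once before moving the index and once after:
#   print(f"{free_gb('C:/'):.1f} GB free")
```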
Hope the above steps help. And as usual, YMMV.
Friday, October 14, 2016
Wednesday, September 17, 2014
Advantages of using VMware PVSCSI interface vs LSI SAS and its caveats
Updated (again) 1330hrs:
Appended some other interesting information from the discussion resulting from that Facebook post.
Thanks guys!
LSI SAS by default supports a queue depth of only 25 (needs further confirmation) vs PVSCSI.
Original Post:-
While there are host OS (HOS) and guest OS (GOS) optimizations that will increase performance, there are caveats to note.
My recommendation would be to follow VMware's best practice (gleaned from various forum posts and blogs - I'm not sure if there are any official articles/KBs on this) and not configure your OS disk/partition with PVSCSI, especially in a production environment where you may have a few other VMware administrators.
However, for a controlled test environment like home labs, by all means try it. All my home lab VMs are running PVSCSI on OS disks too. ;)
The details of why "don't do that" follow:
This is a reply to a post on Facebook's VMUG ASEAN to a question on how to configure PVSCSI as a replacement interface.
(Don't know if this hotlink to the post on VMUG ASEAN will work. If anyone knows a sure-fire way to link Facebook posts let me know in the comments below :D )
Here's my 2 cents. I did some deep-dive research on PVSCSI and there are caveats. Some OSes may have issues with it - particularly VMware View. For PVSCSI to work, VMware Tools has to be installed and functional, and there may be situations where updating or losing VMware Tools costs you connectivity to the disks attached through the PVSCSI adapter.
I had considered using PVSCSI as the OS boot interface (after switching it over using the article Lalit Sharma mentioned). However, if you ever get into a situation where you need to boot the OS (Windows in this case; I don't have enough experience with Linux) to repair it, you will have to reconfigure the interface back to LSI, or the default Windows boot media won't be able to access the OS disk. So take these things into consideration. Anyhow, for my home lab, everything is on PVSCSI. It just may not be wise in a production environment, especially if you have other vSphere admins who may not be as familiar with it.
Appends:-
Roshan Jha: Posted a recent VMware blog article (which I did not see earlier).
It's VSAN related but relevant.
Which vSCSI controller should I choose for performance? - Mark Achtemichuk
Kasim Hansia: "LSI only supports 32 queue depth and PVSCSI queue depth default values are 64 (device) and 254 (adapter). You can increase PVSCSI queue depths to 256 (device) and 1024 (adapter) inside a Windows or Linux Virtual Machine. "
Tan Wee Kiong - thanks for correcting the initial assumption and for the following KB article:
"Large-scale workloads with intensive I/O patterns might require queue depths significantly greater than Paravirtual SCSI default values (2053145)"
"The large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) default values. Current PVSCSI queue depth default values are 64 (for device) and 254 (for adapter). You can increase PVSCSI queue depths to 256 (for device) and 1024 (for adapter) inside a Windows virtual machine or Linux Virtual Machine."
Note that the article has made a distinction between a "device" and the "adapter".
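For my own future reference, the actual tuning knobs described in KB 2053145 look roughly like the following. These values are quoted from my reading of the KB, so verify against the article itself before applying, and note that the guest needs a reboot either way:

```shell
:: --- Windows guest (cmd, run as Administrator, then reboot) ---
:: Raises PVSCSI adapter ring pages and per-device queue depth
REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device ^
  /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

# --- Linux guest (append to the kernel boot line, then reboot) ---
vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32
```

Each ring page buys more adapter-level queue entries, which is how you get from the 254 default up toward the 1024 adapter queue depth the KB mentions.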
Labels: adapter queue depth, caveats, device queue depth, disk, how-to, howto, interfaces, LSI SAS, optimizations, PVSCSI, queue depth, storage, tuning, VM, VMware, VMware Tools, VSAN, vSphere


