ESXCLI – VMFS Storage Reclamation with UNMAP Command

Typically when you provision your datastores for VMware, you will thin provision them.  So let’s say you bring three VMFS5 datastores up to near capacity, but then you free up 30% on each datastore.  Do you get that space back on your SAN automatically?  If you are using VMFS6 datastores and have configured Automatic Space Reclamation, then yes, you would.  If you are running VMFS5 datastores, however, you will need to perform this process manually.  That’s where the ESXCLI UNMAP command comes into play.  I am going to copy/paste some general information about this command from its VMware Knowledge Base article, since it’s good background to have; the same KB article is also worth keeping handy as a reference.  Note that when it references “new”, it means new to vSphere 5.5.

New command in the esxcli namespace that allows deleted blocks to be reclaimed on thin provisioned LUNs that support the VAAI UNMAP primitive.

The command can be run without any maintenance window, and the reclaim mechanism has been enhanced as such:

  • Reclaim size can be specified in blocks instead of a percentage value to make it more intuitive to calculate.
  • Dead space is reclaimed in increments instead of all at once to avoid possible performance issues.

With the introduction of 62 TB VMDKs, UNMAP can now handle much larger dead space areas. However, UNMAP operations are still manual. This means Storage vMotion or Snapshot Consolidation tasks on VMFS do not automatically reclaim space on the array LUN.

Now let’s get into actually doing this thing.  Note that I am writing this under the assumption that you have the datastore you are working with mapped on all of your ESX hosts.  If you have a particular datastore mapped to only one host, you will need to perform these steps on that host.

1.  Start by opening an SSH session to one of your ESX hosts

2.  Run the following command to get information about all of the mapped datastores for this host
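The command is the standard filesystem listing, which shows each mounted datastore’s volume label, UUID, type, and capacity:

```shell
# List all filesystems mounted on this host, including VMFS datastores
# (shows Volume Name, UUID, type, size, and free space for each)
esxcli storage filesystem list
```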

3.  Now you have a choice.  You can run the unmap command with either the UUID or the Volume Label.  I personally use the Volume Label because it’s easier, let’s be real.  Record your volume label and run the following command, where Datastore00 is your volume label:
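Using the Datastore00 placeholder from above, the command looks like this (the optional `-n` flag sets the number of blocks reclaimed per iteration; it defaults to 200 if you leave it off):

```shell
# Reclaim dead space on the datastore identified by its volume label
esxcli storage vmfs unmap -l Datastore00
```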

If you would like to use the UUID instead, run the following, where 509a9f1f-4ffb6678-f1db-001ec9ab780e is your datastore’s UUID:
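Same command, just swapping the label flag for the UUID flag:

```shell
# Reclaim dead space on the datastore identified by its UUID
esxcli storage vmfs unmap -u 509a9f1f-4ffb6678-f1db-001ec9ab780e
```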


That’s it!  It’s really that simple.  It will take some time, but you should be able to refresh your SAN periodically to see the available disk space going up.  If you would like to watch the log as it completes the unmap operations, do the following:

1.  Open a second SSH session to the same host you are running the unmap command on

2.  Run the following to change directories (not required, but makes life easier):

3.  Run the following to tail the log:
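Steps 2 and 3 together might look like this (which log file shows the UNMAP activity can vary by build; vmkernel.log is another common place to look if hostd.log stays quiet):

```shell
# Move into the log directory so we don't have to type full paths
cd /var/log
# Follow the log and filter for the unmap entries as they happen
tail -f hostd.log | grep -i unmap
```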

The unmap operations will be clearly indicated with UNMAP in the log line.

Installing VMware Tools on Ubuntu Server via CLI

Per usual this is more for my own reference because I’m forgetful, but it might help someone out if they happen to miss the very detailed instructions found here on VMware’s KB site.  For this I think I’ll do a numbered list of steps to make things simple:

1.) Mount VMware Tools installer in vSphere/vCenter by selecting the Install VMware Tools option

2.) SSH into your Linux VM and sudo up, or if you prefer put sudo in front of all your commands…I’m lazy

3.) Create a directory to use as a mount point

4.) Mount the CDROM to your newly created mount point

5.) Copy the zipped files to a temporary directory

6.) Move into your temp directory and extract the files

7.)  Move into the vmware-tools-distrib directory and run the Perl install script
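The full command sequence for steps 3 through 7 might look like the following (the mount point, temp directory, and tarball name are typical examples; the exact VMwareTools version number on your ISO will differ, which is why the wildcard is handy):

```shell
# 3. Create a directory to use as a mount point
mkdir /mnt/cdrom
# 4. Mount the CD-ROM to the new mount point
mount /dev/cdrom /mnt/cdrom
# 5. Copy the zipped installer to a temporary directory
cp /mnt/cdrom/VMwareTools-*.tar.gz /tmp/
# 6. Move into the temp directory and extract the files
cd /tmp
tar -zxvf VMwareTools-*.tar.gz
# 7. Move into the extracted directory and run the Perl installer
cd vmware-tools-distrib
./vmware-install.pl
```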

At this point you’ll be asked a whole bunch of questions to customize the install.  Select whatever works for your environment, though the defaults will work fine for most.  It’ll go through installing and you’re done!

Automated Windows Updates with PowerShell and PowerCLI

Holy cow it’s been a while!  Real life must be catching up with me or something, I am not sure.  Anywho, I’ve had this issue where I have never felt comfortable automating Windows Updates on my servers in case the worst should happen.  Seems like a reasonable concern, right?  Well I finally got over it, did some digging, and found a suitable solution that I am comfortable with.

It really is quite simple.  I hijacked some PowerShell modules from the Technet Script Center (found here) and then did some work with the PowerCLI module from VMware to take a snapshot prior to the updates.  Once it is done taking a snapshot of a VM (I pull them from an array), it will begin updates and restart as needed.  I do not delete the snapshots in an automated fashion at this time; I just check that everything is still good when I get into the office in the morning and then remove them all with PowerCLI (assuming there are no issues) as seen below:
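A sketch of that cleanup, assuming your update script names its snapshots with a recognizable prefix (the “PreUpdate” name and the vCenter address here are placeholders, not what the original script necessarily used):

```powershell
# Connect to vCenter, then remove every snapshot matching the
# pre-update naming convention across all VMs
Connect-VIServer -Server vcenter.example.local
Get-VM | Get-Snapshot | Where-Object { $_.Name -like "PreUpdate*" } |
    Remove-Snapshot -Confirm:$false
```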

Since I said I run this while I’m sleeping, I obviously just have it set up with Task Scheduler running with a service account I created with access to all the necessary servers and my vCenter environment.  Without further ado, I give you what I have coined “UpdateAutoPilot” below:
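The original script isn’t reproduced here, but based on the description above, the skeleton is roughly the following.  The server names, snapshot naming, and the `Invoke-WUInstall`/`Get-WUInstall` calls (from the Technet Script Center Windows Update module) are assumptions to illustrate the flow; your module version’s cmdlet names and parameters may differ:

```powershell
# Sketch of "UpdateAutoPilot": snapshot each VM, then push updates.
# Assumes PowerCLI and the Technet Windows Update module are available.
Connect-VIServer -Server vcenter.example.local

# Servers to patch (VM names in vCenter matching Windows hostnames)
$servers = @("AppServer01", "AppServer02", "DBServer01")

foreach ($server in $servers) {
    # Take a pre-update snapshot so there is a rollback point
    New-Snapshot -VM (Get-VM -Name $server) `
        -Name "PreUpdate $(Get-Date -Format yyyy-MM-dd)"

    # Kick off updates remotely; -AcceptAll and -AutoReboot let it
    # run unattended and restart as needed
    Invoke-WUInstall -ComputerName $server -Script {
        Import-Module PSWindowsUpdate
        Get-WUInstall -AcceptAll -AutoReboot
    }
}
```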



Upgrade VMware ESXi 6.0 Update 3 to 6.5 Update 1 with Update Manager

Update 1 for VMware ESXi 6.5 has been out for a couple of months now with no major issues, so we decided it was finally time to make the jump from ESXi 6.0 Update 3 to ESXi 6.5 Update 1.  I accomplished this using Update Manager in vCenter, as described in the following steps.

1.)  Navigate to the first host you wish to upgrade and select the Update Manager tab.  Here confirm that you are compliant with the Critical Host Patches (Predefined) baseline.  If you’re not, I would suggest getting all of your patches installed prior to upgrade.  Probably not necessary, but it gives me that warm and fuzzy feeling.

2.)  Once you have that under control we will need to upload the actual ESXi ISO for the update.  We can’t do this where we are currently, so select the Go to Admin View on the top right.

3.)  Select the Manage tab, then select ESXi Images, and finally click Import ESXi Image…

4.)  Once you have your ESXi Image uploaded, select the Host Baselines tab and select +New Baseline…

5.)  Give the baseline a name and description, select the Host Upgrade radio button, and select Next.

6.)  Select the ESXi image that you just uploaded and select Next.

7.)  You should see a little summary page.  Confirm your settings and select Finish.

8.)  Now that we have our baseline, we need to attach that baseline to the host we wish to upgrade.  Navigate out of the Admin View back to the Update Manager tab of the host you are upgrading and select the Attach Baseline… in the top right corner.

9.)  Select the upgrade baseline you just created and select OK.

10.)  Your host will now show that it is Non-Compliant because it does not meet the host upgrade baseline we just attached.  Select the Remediate… option in the top right to begin the upgrade process.

11.)  Select the baseline you created and select Next.

12.)  Select the host you wish to “remediate” (aka the host you want to upgrade) and select Next.

13.)  Accept the EULA and select Next.

14.)  On the Advanced Options menu you can choose to schedule the upgrade if you like.  When ready select Next.

15.)  On the Host remediation options page you have options for PXE booting, maintenance mode on failure settings, VM Power State, and so on.  Select what makes sense for you and select Next.

16.)  On the Cluster remediation options you have a number of options available to disable.  I disable FT and HA admission control, but choose what suits your environment and select Next.

17.)  Review the summary page and select Finish to get things started.


At this point, go grab some coffee and relax.  If you have iLO or something similar, you can watch your host reboot a couple of times.  For my hardware this entire process took roughly 10 to 15 minutes.  Once you are back up, take your machine out of maintenance mode and you’re good to go!

vSphere HA Agent Has an Error: vSphere HA Agent Cannot Be Installed or Configured

Had a weird issue the other day with one of my ESXi hosts in my cluster.  I was actually spinning up a new VM, and after I finished installing VMware Tools and all the usual nonsense I saw this error on the VM in VMware:

Virtual machine failed to become vSphere HA Protected and HA may not attempt to restart it after a failure.

I checked the ESXi host that was running the VM and saw the following error:

vSphere HA agent for host <Hostname> has an error in <Cluster Name> in <Datacenter Name>: vSphere HA agent cannot be installed or configured.

I did some searching on the internet and found some people suggesting restarting the management agents on the host and selecting the “Reconfigure for vSphere HA” option in the right-click menu.  I tried both of those with no success.  I then vMotioned all the VMs off of the host and restarted it.  Still no dice.  I vMotioned everything back, ripped our HA config off of the cluster, and reconfigured it with a few quality-of-life changes.  That did the trick!  Probably just a fluke, but it was a weird one to say the least.

Unable to Install Hyper-V on VMware VM

Setting up a test environment recently, I ran into a bit of an issue installing the Hyper-V role on a Server 2012 R2 VM that I have running in my home VMware test environment.  When I would attempt it, I received the following error:

After doing a little Googling I came across this article.  It explains the long way of doing things, but in the comments was a much quicker and easier way which I tested and proved to work.

Start by shutting down your VM and selecting the Edit Settings option.  When the window appears select the VM Options near the top, expand the Advanced section, and select the “Edit Configuration…” option next to Configuration Parameters.

Another window will pop up, and you will need to select the “Add parameter” option on the top left.  Add the following two parameters:

Key = hvh.enable       Value = TRUE

Key = hypervisor.cpuid.v0        Value = FALSE
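Equivalently, if you prefer to edit the VM’s .vmx file directly (with the VM powered off), the two entries look like this:

```
hvh.enable = "TRUE"
hypervisor.cpuid.v0 = "FALSE"
```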

Save all of your changes, start your VM back up, and install the Hyper-V role.