Apr 29, 2011

I normally don’t post these, but there are a few important symptoms I’ve personally seen that appear to be resolved by this patch.

Two new patches were released yesterday.  You can find the details for the ESXi patch in this KB Article and in this KB Article.  There are two very important security fixes in the patches, but they fix some problem symptoms as well.  Here are some of the symptoms that are fixed (per the KB articles at the time of this writing):

  • If you configure the port group policies of NIC teaming for parameters such as load balancing, network failover detection, notify switches, or failback, and then restart the ESXi host, the ESXi host might send traffic only through one physical NIC.
  • Virtual machines configured with CPU limits might experience a drop in performance when the CPU limit is reached (%MLMTD greater than 0). For more information, see KB 1030955.
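
If you’re wondering which of your VMs are even exposed to that second symptom, it’s easy to inventory configured CPU limits with a short script.  Here’s a minimal sketch using the pyVmomi Python bindings; the vCenter address and credentials are placeholders, and error handling is omitted:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: point these at your own vCenter. Depending on your
# pyVmomi version you may also need to pass an SSL context here.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    alloc = vm.config.cpuAllocation if vm.config else None
    # limit == -1 means "unlimited"; anything else is a hard MHz cap,
    # which is what can drive %MLMTD above zero.
    if alloc and alloc.limit not in (None, -1):
        print(f"{vm.name}: CPU limit {alloc.limit} MHz")

view.Destroy()
Disconnect(si)
```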
Apr 29, 2011

Got this lovely message yesterday while installing a new vCenter 4.1U1 for a customer.  I’d never seen this one before, until I found this thread, and I was very surprised by the resolution.  Here’s a tip for all of you:  don’t name your vCenter service account the exact same name as your vCenter Server.

When we renamed the service account so it differed from the vCenter server’s name, everything installed fine.  I had to post it; this issue really surprised me.
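
If you want to guard against this going forward, the check is trivial to script before you kick off the installer.  A minimal sketch, assuming a Windows install host; the service account name below is a made-up placeholder:

```python
import os

# Placeholder for the account you plan to run the vCenter service under.
service_account = "svc-vcenter"
computer_name = os.environ["COMPUTERNAME"]  # standard Windows env var

if service_account.lower() == computer_name.lower():
    raise SystemExit(
        f"Service account '{service_account}' matches the server name "
        f"'{computer_name}'; pick a different account name before installing.")
print("Names differ; the installer shouldn't hit this particular issue.")
```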

Apr 18, 2011

Hopefully this issue will get fixed soon.  I was working on a new deployment for a customer when I noticed it.  The issue is this:  I created a few Windows VMs for this customer to use as templates.  I created them as thick, figuring I would patch, defrag, shrink, then convert to thin.  When I did, I noticed that my disks went from 50GB to 49.9GB.  Huh?  Looking inside the VM, the C drive was using 8GB of space, so where was this 49.9GB coming from?

After a bit of searching on the net, I came across this thread in the communities.  I looked through it and sure enough, the listed fix works.  I had an extra LUN for this customer that had not been used yet.  I deleted the datastore and created a new one with a different block size than the others.  I tried the svmotion with conversion to thin again and this time, bingo, 8GB as expected.  I was then able to svmotion back to my original LUN, telling the svmotion to keep the same format.  It worked great and stayed at 8GB.
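
If you’d rather script the workaround than click through the migrate wizard every time, here’s a rough sketch of the thin-converting svmotion step using the pyVmomi Python bindings.  The VM and datastore names are placeholders, si is assumed to be an existing SmartConnect session, and waiting on the returned task is left out for brevity:

```python
from pyVmomi import vim

def svmotion_to_thin(si, vm_name, dest_datastore_name):
    """Storage vMotion vm_name to dest_datastore_name, converting to thin."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
    vm = next(o for o in view.view
              if isinstance(o, vim.VirtualMachine) and o.name == vm_name)
    ds = next(o for o in view.view
              if isinstance(o, vim.Datastore) and o.name == dest_datastore_name)
    view.Destroy()

    spec = vim.vm.RelocateSpec()
    spec.datastore = ds
    # 'sparse' requests thin format; leave transform unset on the trip
    # back so the disk keeps the format you just converted it to.
    spec.transform = vim.vm.RelocateSpec.Transformation.sparse
    return vm.RelocateVM_Task(spec)  # returns a Task you can wait on
```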

Duncan wrote an explanation in the community thread for what is going on:

“It most definitely has to do with the type of datamover used. When a different blocksize is used for the destination the legacy datamover is used which is the FSDM. When the blocksize is equal the new datamover is used which is FS3DM. FS3DM decides if it will use VAAI or just the software component, in either case unfortunately the zeroes will not be gobbled. I have validated it and reported it to engineering that this is desirable. The team will look into it but unfortunately I cannot make any promises if or when this feature would be added.”

It sounds to me like if your storage supports VAAI and ESXi offloads the svmotion to it, there’s no hope of shrinking, because VAAI is not that intelligent.  When you svmotion between two LUNs with different block sizes (or between iSCSI, NFS, or FC), it uses the old-school datamover (probably because the VMFS block layout changes), which will actually convert the VM to thin as it moves.
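
To make that concrete, here’s a toy restatement of the selection logic in Python.  This is not real VMkernel code, just the decision tree as Duncan describes it, with all names purely illustrative:

```python
# Toy model of the datamover selection described above; not VMkernel code.
def pick_datamover(src_block_size_kb, dst_block_size_kb, vaai_capable):
    if src_block_size_kb != dst_block_size_kb:
        # Legacy FSDM moves data through the host and skips zeroed blocks,
        # so a thick-to-thin conversion actually reclaims the space.
        return "FSDM (legacy): zeroes dropped, thin conversion works"
    # Same block size: FS3DM is chosen, offloaded via VAAI when the array
    # supports it. Either way the zeroed blocks come along for the ride.
    if vaai_capable:
        return "FS3DM (VAAI offload): zeroes preserved"
    return "FS3DM (software): zeroes preserved"

print(pick_datamover(1, 8, vaai_capable=True))  # different block sizes: FSDM
print(pick_datamover(8, 8, vaai_capable=True))  # same block size: VAAI path
```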

Hopefully VMware engineering will change svmotion to automatically fall back to the old-school datamover whenever you choose to change the disk format (thick to thin or back), and only use VAAI when you keep the same format during the move.  We’ll have to wait and see….
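
In sketch form, the rule I’m hoping for would be something like this (again, purely illustrative):

```python
# Purely illustrative: the selection rule this post is hoping for.
def pick_datamover_proposed(format_change_requested, vaai_capable):
    if format_change_requested:
        # Any thick/thin conversion would force the legacy path that
        # actually honors the requested format.
        return "FSDM (legacy)"
    return "FS3DM (VAAI offload)" if vaai_capable else "FS3DM (software)"
```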