Apr 18, 2011
 

Hopefully this issue will get fixed soon.  I was working on a new deployment for a customer when I noticed it.  The issue is this:  I created a few Windows VMs for this customer to use as templates.  I created them as thick, figuring I would patch, defrag, shrink, then convert to thin.  When I did, I noticed that my disks went from 50GB to 49.9GB.  Huh?  Looking inside the VM, the C drive was using 8GB of space, so where was this 49.9GB coming from?  After a bit of searching on the net, I came across this thread in the communities.  I looked through it and sure enough the listed fix works.  I had an extra LUN for this customer that had not been used yet, so I deleted the datastore and created a new one with a different block size than the others.  I tried the svmotion with converting to thin again and this time, bingo, 8GB as expected.  I was then able to svmotion back to my original LUN, telling the svmotion to keep the same type.  It worked great and stayed at 8GB.
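The behavior I saw can be modeled in a few lines of Python.  This is a toy model, not anything VMware ships; the function and names are mine.  The point it illustrates: zeroed blocks only get dropped when the move both converts to thin and crosses datastores with different block sizes.

```python
# Toy model of the workaround above. A thick disk full of zeroed free
# space only loses those zeroed blocks when the svmotion crosses
# datastores with different VMFS block sizes (legacy datamover) AND
# converts to thin. All names here are illustrative, not VMware's.

BLOCK_GB = 1  # each modeled block represents 1 GB, for simplicity


def svmotion(disk_blocks, src_bs_mb, dst_bs_mb, to_thin):
    """Return the allocated size (GB) after a modeled svmotion.

    disk_blocks: list of booleans, True = block holds real data,
    False = block was zeroed by the defrag/shrink pass.
    """
    legacy_datamover = src_bs_mb != dst_bs_mb
    if to_thin and legacy_datamover:
        # Legacy datamover drops ("gobbles") the zeroed blocks.
        return sum(BLOCK_GB for used in disk_blocks if used)
    # Same block size: the new datamover copies zeroed blocks too,
    # so a "thin" destination stays nearly full-size. Thick stays thick.
    return len(disk_blocks) * BLOCK_GB


# 50 GB disk with 8 GB of real data, rest zeroed after defrag/shrink:
disk = [True] * 8 + [False] * 42

same_bs = svmotion(disk, 1, 1, to_thin=True)  # stays 50 GB allocated
diff_bs = svmotion(disk, 1, 8, to_thin=True)  # shrinks to 8 GB
```

Running the two calls at the bottom reproduces what I saw: same block size leaves the "thin" disk essentially full size, while a different block size shrinks it to the 8GB of real data.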

Duncan wrote an explanation in the community thread for what is going on:

“It most definitely has to do with the type of datamover used. When a different blocksize is used for the destination the legacy datamover is used which is the FSDM. When the blocksize is equal the new datamover is used which is FS3DM. FS3DM decides if it will use VAAI or just the software component, in either case unfortunately the zeroes will not be gobbled. I have validated it and reported it to engineering that this is desirable. The team will look into it but unfortunately I cannot make any promises if or when this feature would be added.”

It sounds to me like if your storage supports VAAI and ESXi offloads the svmotion to it, there’s no hope of shrinking, because VAAI is not that intelligent.  When you svmotion between two LUNs with different block sizes (or between iSCSI, NFS, or FC datastores) it uses the old-school svmotion (probably because the VMFS block layout changes), which will actually convert the VM to thin as it moves.
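If Duncan’s explanation is right, the selection logic boils down to something like this.  A hedged sketch with my own function and return values; ESXi obviously does not work in Python, and the VAAI-capability check here is an assumption on my part:

```python
# Sketch of the datamover selection described above, as I understand it
# for ESXi 4.1: different source/destination VMFS block sizes -> legacy
# FSDM; equal block sizes -> FS3DM (offloaded via VAAI when the array
# supports it). Only the legacy FSDM drops zeroed blocks during a
# thick-to-thin conversion. Names and structure are illustrative only.

def pick_datamover(src_block_size_mb, dst_block_size_mb, vaai_capable=False):
    """Return (datamover, reclaims_zeroes) for a modeled svmotion."""
    if src_block_size_mb != dst_block_size_mb:
        return ("FSDM", True)               # legacy copy; zeroes "gobbled"
    if vaai_capable:
        return ("FS3DM (VAAI offload)", False)  # array does the copy
    return ("FS3DM (software)", False)      # new datamover, zeroes kept
```

So a 1MB-to-8MB move picks FSDM and shrinks the disk, while a 1MB-to-1MB move picks FS3DM and does not, regardless of whether the array handles the copy.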

Hopefully VMware engineering will change svmotion to automatically use the old-school datamover when you select a conversion between thick and thin, and only use VAAI when you choose to keep the same format during the move.  We’ll have to wait and see….

  2 Responses to “Bug: ESXi 4.1U1 does not svmotion from thick to thin as expected”

  1. It is NOT a bug; it is working as designed.

    By the way, I already wrote an article about it:
    http://www.yellow-bricks.com/2011/02/18/blocksize-impact/

    The difference is that with the old datamover, SvMotion would read the block, add it to a buffer, and write the block.
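    That read/buffer/write loop is roughly the following.  Illustrative Python only, with my own names, not the actual ESXi code path; the zero check is what effectively lets the legacy datamover “gobble” the zeroes when the destination disk is thin:

```python
# Sketch of a legacy-FSDM-style copy loop: read each block into a
# buffer, and when the destination is thin, skip writing an all-zero
# buffer so that block is never allocated. Illustrative only.

def fsdm_copy(read_block, write_block, num_blocks, dest_is_thin):
    """Copy num_blocks blocks; return how many were actually written."""
    written = 0
    for i in range(num_blocks):
        buf = read_block(i)                # read the block into a buffer
        if dest_is_thin and not any(buf):  # all-zero block, thin target:
            continue                       # skip the write entirely
        write_block(i, buf)                # otherwise write it out
        written += 1
    return written
```

    With a thin destination the zeroed blocks are simply never written; with a thick destination every block gets written, zeroes included.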

  2. How is it not a bug when the same identical activity has 2 different outcomes depending on whether the datastores have the same block size? The technical reason this occurs is not justification for the software failing to perform the expected action.

    If an administrator runs through all of the tasks to shrink the disk and performs an svmotion to convert to thin, they expect the VM to become thin. When it does not, it is a bug (or a failure of the software), whichever way you want to look at it.
