I use ZFS on FreeNAS 8.0.4 and use iSCSI to provide a LUN to VMware ESXi 5.0.
I posted this at http://forums.freenas.org/showthread.php?10990-Memory-utilization-and-performance-problem/page4, but I believe people here may have more expertise on ZFS.
FreeNAS doesn't support iSCSI UNMAP. I have created, moved, and deleted virtual machines on the ZFS-backed iSCSI LUN. After a while, zpool list reports 95% of capacity used, but VMware shows the VMFS datastore using only 50%.
AFAIK, when a virtual machine is created, the zvol allocates disk space for it. However, when virtual machines are later moved or deleted, the zvol is never told those blocks are free, so it never updates its free capacity.
[root@data1] ~# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
data1   2.25T  2.07T  185G   95%  ONLINE  /mnt
Now, I believe the performance is deeply affected because of this wrong free-capacity figure: the pool thinks it is 95% full.
What should I do to let ZFS know the real free capacity?
Unfortunately there is no easy solution. When you have a filesystem sitting on top of a ZFS volume (I'm assuming you created a sparse volume), that filesystem may be placing blocks of data all over the device. Even if the filesystem is no longer using some of that space, the ZFS volume has no way to know that.
The only way I found to alleviate it is to use utilities on whatever filesystem sits on top of the ZFS volume to zero out the unused space: usually a defrag first (if NTFS or FAT32) and then zero out the free space. If you are running Linux in the VM, zerofree is a nice utility for this.
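As a rough sketch of what that looks like inside a Linux guest (device name /dev/sda1 and the dd fallback path are examples, not something from your setup; zerofree needs the filesystem unmounted or mounted read-only):

```shell
# Option 1: zerofree on an unmounted ext2/3/4 filesystem
# (boot the VM from a rescue ISO, or remount the fs read-only first)
zerofree -v /dev/sda1

# Option 2: crude fallback on a mounted filesystem - fill free space
# with a zeroed file, then delete it. This temporarily fills the disk.
dd if=/dev/zero of=/zero.fill bs=1M || true
rm -f /zero.fill
sync
```

Once the guest's free space is zeroed, ZFS compression (if enabled) will collapse those zero blocks, and a subsequent zfs send will skip them.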
One other thing I noticed: when using zfs send to copy a volume to another server, it seems to deflate a bit. I am not sure why; probably because once a sparse volume has grown and claimed blocks it will not let go of them, whereas zfs send only transmits the blocks actually in use.
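So one heavy-handed way to reclaim the space is a send/receive round trip. A sketch, assuming a zvol named data1/vmlun and a second pool or host to receive into (both names are examples):

```shell
# Snapshot the zvol, send it elsewhere, then replace the original.
# Requires downtime: stop the iSCSI target / VMs using the LUN first.
zfs snapshot data1/vmlun@migrate
zfs send data1/vmlun@migrate | ssh otherhost zfs receive tank/vmlun

# Or locally into the same pool under a new name, then rename:
# zfs send data1/vmlun@migrate | zfs receive data1/vmlun-new
```

The received copy should only occupy the blocks that were actually referenced at snapshot time.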
Also consider turning on compression, even something light like lzjb. There's no reason not to run with it, and it does wonders when you have VM OSes using volumes. My volumes use gzip-5; Windows 7 VMs typically run at 2.00x compression and Linux VMs at 2.5x.
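Enabling and checking compression is a one-liner each (the dataset name data1/vmlun is just an example; note compression only applies to blocks written after it is turned on):

```shell
# Enable lightweight lzjb compression on the zvol
zfs set compression=lzjb data1/vmlun

# Later, see how well it is compressing
zfs get compression,compressratio data1/vmlun
```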