The three reasons I'm aware of that cause deleting files not to return space are: 1) deleting files which are linked to multiple names, 2) deleting files which are still open by a process, and 3) deleting files which are backed up by a snapshot. While it is possible you deleted 20 GB in multiply linked and/or open files, I'd guess snapshots to be the most likely case.
For multiple "hard" links, you can see the link count (before you delete the file) in the second field of "ls -l" output. If it is greater than one, deleting that name won't free up space; you have to delete all names linked to the file to free the space.
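For example (the file name here is hypothetical):

    $ ls -l /data/report.log
    -rw-r--r--   2 root  root  20971520 Jun 10 12:00 /data/report.log

The second field is 2, so a second hard link exists somewhere. One way to find the other name is to get the inode number with "ls -i" and then search with "find /data -inum <number>".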
For open files, you can use the "pfiles" command to see which files a process has open. The space won't be recovered until every process holding the file open closes it. Killing the process will do the job; if you like to use a big hammer, a reboot kills everything.
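For example (the PID and path are hypothetical, and the output is trimmed; recent Solaris releases print the file's pathname when it is known):

    $ pfiles 1234
    1234:   tail -f /var/log/app.log
      ...
       3: S_IFREG mode:0644 dev:85,0 ino:12345 uid:0 gid:0 size:1048576
          O_RDONLY
          /var/log/app.log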
For snapshots: use the "zfs list -t snapshot" command and look at the snapshots. The USED column indicates how much space is held by each snapshot. Destroying a snapshot will free up space unless the space is still held by another snapshot. To free the space held by a file, you have to destroy every snapshot that contains that file.
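For example (pool and snapshot names are hypothetical):

    $ zfs list -t snapshot
    NAME             USED  AVAIL  REFER  MOUNTPOINT
    tank/data@mon   5.01G      -  40.2G  -
    tank/data@tue    120M      -  41.5G  -
    $ zfs destroy tank/data@mon

Note that USED for a snapshot counts only the space unique to that snapshot; blocks shared with other snapshots aren't freed until the last snapshot referencing them is destroyed.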
Hopefully, I got all of this right.
3) deleting files which are backed up by a snapshot
doesn't always work, because one or more snapshots may still hold the file's data. Deleting a file in such a situation can even fail with an error (EDQUOT).
A related CR exists:
CR 6976827 Unable to remove files once zfs filesystem is totally filled
but the fix for that CR doesn't cover such cases.
So to free space you should delete snapshots (i.e., reduce their retention) instead of deleting files.
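A sketch of what that looks like in practice (the names are hypothetical, and the exact error text varies by release):

    $ rm /tank/data/bigfile
    rm: /tank/data/bigfile not removed: Disc quota exceeded
    $ zfs destroy tank/data@old
    $ rm /tank/data/bigfile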
For example, suppose your dataset is 80% full and 100% of that data is rewritten between two snapshots: you will hit this issue, because the snapshots still hold all of the old blocks while the rewrite needs space for the new ones.
1. Use zfs refquota instead of zfs quota as a workaround for such ZFS space issues: a refquota limits only the space referenced by the dataset itself, so snapshot usage is not counted against the limit and doesn't eat into the dataset's free space (see the example after this list).
2. Reduce snapshot retention for your dataset.
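A minimal sketch of both suggestions, assuming a hypothetical dataset tank/data:

    # 1. Limit only the space the dataset itself references;
    #    snapshot usage no longer counts against the limit.
    $ zfs set quota=none tank/data
    $ zfs set refquota=100G tank/data

    # 2. Reduce retention by destroying the oldest snapshots.
    $ zfs destroy tank/data@2011-01-01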