You did reboot after changing /etc/system?
And if there is going to be significant memory pressure on this server, and your file I/O patterns don't involve performance-critical repeated reads of the same file(s), you'll do much better to limit the ZFS ARC to something like 1 GB. Or smaller.
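For reference, one common way to cap the ARC on Solaris 10 is the `zfs_arc_max` tunable in `/etc/system` (the 1 GB value below is just the example figure from above; a reboot is required for it to take effect):

```
* Cap the ZFS ARC at 1 GB (value in bytes)
set zfs:zfs_arc_max = 0x40000000
```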
Yes, the kernel will release the ZFS ARC if something else needs the memory.
But the release will be slow. Much, much slower than getting truly free memory.
Thanks for the info - the host was rebooted, and I've seen the impact of slow ZFS memory release first-hand in earlier Solaris 10 releases.
I'm not really concerned about the ARC size itself - I believe the configured limit is being enforced correctly.
The problem I have is that I cannot explain what the additional 60GB - the difference between the "ZFS File Data" figure (160GB) and the ARC size (100GB) - is used for.
That's too much memory to dismiss lightly. I'm sure the kernel is putting this 60GB to good use; it's just that I have no idea what it's for, and I should be able to explain it.
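In case it helps, the figures I'm comparing come from the usual Solaris tools (run as root on the affected host; output fields may vary by release):

```
# Kernel-wide memory breakdown, including the "ZFS File Data" line
echo ::memstat | mdb -k

# Current ARC size and target size, in bytes
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c
```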
Does anyone know?