I have a question about how the ZFS ARC cache's memory consumption interacts with applications running on a server. I am running into a situation where I believe the primary enterprise application, an Oracle DB running on the server, is unable to obtain additional RAM when it needs more memory. The ZFS documentation I have read says the ARC is supposed to release RAM to applications, but I am not sure that is happening. How can I test this? The machine is a T5120 running Solaris 10 with 32 GB of RAM.
The application's documentation says to size the server for 16 GB, but day to day it usually runs with about 6 GB, which is 17% of memory. The ZFS ARC is consistently around 69–70% of memory (roughly 23 GB). I need to figure out whether I am not seeing higher memory utilisation from the application because the application is poorly written, or because the ARC is not being released when requested. The latter is my running theory.
Also, while polling the server for data, I found a high rate of page faults per second. The server seems to run at 350 to 650 page faults/second and can spike to 1.8k during periods of high load.
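A high fault rate by itself is not necessarily a problem: minor faults are serviced from memory, while sustained major faults (and a nonzero page-scanner rate) indicate real memory pressure. One way to tell them apart on Solaris is with vmstat; this is a sketch of the commands rather than output from your box:

```shell
# Cumulative counters since boot -- compare the "minor (as) faults"
# line against the "major faults" line:
vmstat -s | egrep 'minor|major'

# Interval view, six 5-second samples: "mf" is minor faults/sec,
# and a sustained nonzero "sr" (scan rate) means the page scanner
# is running, i.e. genuine memory pressure:
vmstat 5 6
```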
Is lowering the ARC's maximum size the only option? That is what is usually suggested when running a DB. I would appreciate any suggestions; this is my first run-in with ZFS.
Page Summary                Pages                MB   %Tot
------------     ----------------  ----------------   ----
Kernel                     319422              2495     8%
ZFS File Data             2890547             22582    70%
Anon                       703625              5497    17%
Exec and libs               28374               221     1%
Page cache                  35260               275     1%
Free (cachelist)             9518                74     0%
Free (freelist)            114959               898     3%
Total                     4101705             32044
Physical                  4070897             31803
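To test whether the ARC actually shrinks under pressure, you can watch the ARC size alongside free memory while the application allocates. Below is a minimal sketch using kstat; the `arcstats` and `system_pages` statistic names are the standard ones on Solaris 10, and 1048576 is just the bytes-to-MB divisor:

```shell
#!/bin/ksh
# Poll ARC size, ARC target (c), and free memory every 10 seconds.
# If the ARC is yielding memory under pressure, "arc" should fall
# toward zfs_arc_min while "free" recovers.
PAGESIZE=$(pagesize)        # bytes per page (8192 on sun4v)
while true; do
    arc_size=$(kstat -p zfs:0:arcstats:size | awk '{print $2}')
    arc_c=$(kstat -p zfs:0:arcstats:c | awk '{print $2}')
    freepg=$(kstat -p unix:0:system_pages:freemem | awk '{print $2}')
    echo "$(date '+%H:%M:%S') arc=$(( arc_size / 1048576 ))MB \
target=$(( arc_c / 1048576 ))MB free=$(( freepg * PAGESIZE / 1048576 ))MB"
    sleep 10
done
```

Run it in one terminal while you drive the application (or a test program that mallocs and touches a few GB) in another; if `arc` stays pinned at ~23 GB while `free` collapses and the scan rate climbs, the ARC is not giving memory back fast enough.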
In my experience, the ZFS ARC does not always release memory when there is contention, even though in theory it should.
I always reduce the ARC to a bare minimum if the application data is on shared storage such as a SAN — in other words, when you are not using JBODs for your app data (be it a DB or a file server). If you do use local storage for your application, you should give the ARC a reasonable amount of memory for better performance.
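If you do decide to cap it, on Solaris 10 the limit is set in /etc/system and takes effect on the next boot. The 4 GB value below is only an illustration — pick a limit that leaves the application its documented 16 GB plus kernel headroom:

```
* /etc/system -- cap the ZFS ARC at 4 GB (0x100000000 bytes).
* Illustrative value only; reboot required for it to take effect.
set zfs:zfs_arc_max = 0x100000000
```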