As far as I'm aware, the vm.pagecache parameter belongs to the 2.2 and 2.4 kernels; it is not a configurable parameter in the 2.6 kernel, or at least it was omitted and did nothing when set.
From what I understand, the vm.dirty parameters can be configured to control when the kernel writes dirty pages back to disk, and thereby (perhaps) reduce the amount of I/O required when syncing or dropping the buffer cache. Perhaps vfs_cache_pressure and vm.swappiness could be related too. I guess it depends on what you are trying to fix.
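For reference, those knobs live in /etc/sysctl.conf (or under /proc/sys/vm). A minimal sketch of what tuning them might look like; the values here are purely illustrative, not recommendations:

```shell
# Illustrative /etc/sysctl.conf fragment -- example values only
vm.dirty_background_ratio = 5    # background writeback starts at 5% of memory dirty
vm.dirty_ratio = 10              # writers block once 10% of memory is dirty
vm.vfs_cache_pressure = 150      # reclaim dentry/inode caches more eagerly than the default (100)
vm.swappiness = 10               # prefer dropping cache over swapping anonymous pages
# Apply without a reboot:  sysctl -p
```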
The kernel parameters of the vm subsystem are explained at http://www.kernel.org/doc/Documentation/sysctl/vm.txt
Thanks for the reply.
What I'm trying to do is tune the system to reduce the number of pages that the VMM will use for file caching. I am coming from an AIX background where we tune the VMM to minimize virtual memory usage for caching and maximize it for computational use. This is the recommendation from both Oracle and IBM when tuning AIX systems for primarily RDBMS workloads. Filesystem caching is counter-productive when running databases since the db will do its own caching of data within the SGA, and will do a much better job of it. In this context, filesystem caching adds unneeded overhead.
We are running the databases with the filesystemio_options=setall parameter in order to use direct I/O, which should bypass the filesystem cache, but we still see high memory use for the cache.
I've considered playing with the vm.swappiness parameter, but we see very little swap usage to begin with, so that may not change anything.
Whether or not the buffer cache adds overhead depends on what is being cached and whether the cache needs to be scanned. What you see in the buffer cache may not be used by your database at all.
From what I understand, the buffer could simply be file system cache that the kernel decided to fill because the memory was otherwise unused. The buffer cache is automatically reclaimed when the kernel or programs demand memory. Unless your system crashes or you have to flush a large buffer cache, it is most likely beneficial rather than a problem.
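You can see this from the kernel's own accounting, which shows how much of "used" memory is actually reclaimable cache. A quick check (assuming a Linux /proc filesystem):

```shell
# Overall memory picture; the buffers/cache figures are reclaimable
free -m
# The same numbers straight from the kernel's accounting file
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
# For testing only: drop clean caches (needs root); the kernel just refills them
#   sync; echo 3 > /proc/sys/vm/drop_caches
```

If an application later asks for memory, the kernel shrinks the cache transparently, so a large "Cached" figure by itself is not memory pressure.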
I imagine that the people who develop the kernel are intelligent enough to properly evaluate the behavior of the kernel buffer cache and are well aware of possible shortcomings. The fact that it cannot be disabled or tuned like in previous versions tells me that there is no need to worry about it anymore.
I agree that they most likely know what they're doing. I'm trying to reconcile what I've learned and done in the aix world regarding oracle db performance with how things are done in the linux world.
IBM places a lot of importance on minimizing file system cache usage when running primarily RDBMS workloads. They make a good case for changing default behavior.
If you're interested, this document is chock full of useful info on tuning systems for database usage. Yes, it is AIX-oriented, but still good reading in general.
IBM provides excellent documentation.
Unfortunately, finding such information for Linux requires gathering bits and pieces from here and there. The available documentation often leaves too much room for interpretation unless you already have a technical background, or is too detailed, making it difficult to comprehend. However, I guess this problem pretty much applies to all products developed during the last decade.
I honestly have no experience with AIX. My experience with commercial Unix systems is limited to Tru64, which is all RIP now. IT is a strange business, often favoring trivial solutions over superior ones, eventually coming around in the end. Linux has certainly come a long way. However, it is under constant development and can make drastic changes, for instance, raw disk support.
By the way, OEL was renamed to Oracle Linux. OL uses the Oracle UEK kernel by default, which is optimized for Oracle products. Product patches and errata can be downloaded for free from Oracle Public Yum. To simplify the initial installation, you can install the "oracle-rdbms-server-11gR2-preinstall" package. It sets up kernel parameters, the Oracle groups and accounts, and triggers the installation of prerequisite software for an Oracle DB installation.
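For example, a sketch assuming an Oracle Linux system already pointed at the public yum repository:

```shell
# Pulls in required packages and configures kernel parameters,
# the oracle user, and the oinstall/dba groups in one step
yum install oracle-rdbms-server-11gR2-preinstall
```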
For performance reasons, you might want to set up the system to use kernel hugepages; not using them is apparently the number one performance issue related to Oracle under Linux. Hugepages can significantly reduce server resource utilization for both memory and processing. The following links should be helpful:
/dev/shm on Oracle Linux 6.x to run Oracle 11g R2 - manual configuration?
Re: Understanding kernel hugepages and Oracle 11g AMM compatibility
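As a rough starting point, the hugepage count is the SGA size divided by the hugepage size (2 MiB on typical x86_64 systems). A back-of-the-envelope sketch with a hypothetical 8 GiB SGA; substitute your instance's actual SGA size:

```shell
# Hypothetical SGA size in MiB -- adjust for your instance
SGA_MB=8192
HUGEPAGE_MB=2                                   # typical x86_64 hugepage size
NR_HUGEPAGES=$(( SGA_MB / HUGEPAGE_MB + 8 ))    # a few extra pages of headroom
echo "vm.nr_hugepages = $NR_HUGEPAGES"
# Persist the value in /etc/sysctl.conf and raise the oracle user's
# memlock limit in /etc/security/limits.conf accordingly.
```

Note that, as the second linked thread discusses, hugepages are not compatible with 11g Automatic Memory Management, so the instance must use manual or ASMM-style SGA sizing to benefit.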
Thanks for the feedback.
I will look at using hugepages. I've been slowly accumulating useful administration and tuning information for OL and enterprise Linux in general. I'm reluctantly migrating from AIX to Linux due to a company acquisition. OL does look to be a solid, well-performing product, but I will miss the excellent hardware and software support I get from IBM.
I've been riding the IT train since punch cards and 9 track tapes, so I've seen massive change, both good and bad. The journey continues...
I remember the times when Linux, and PC hardware performance and reliability in particular, was pretty much a joke. And I find it strange having an x86_64 CPU now that everything has finally moved to 64-bit, 18 years later. However, given the available options today, I honestly would not want to run Oracle Database on anything other than Oracle Linux.