This discussion is archived
6 Replies Latest reply: Jan 11, 2013 5:36 PM by marksmithusa

Using the Result Cache in 11gR2

marksmithusa Journeyer
Has anyone had any experience with using the Result Cache? Specifically, I'm running 11.2.0.3 on AIX 5.3 (soon to be 7.1)

I have a system in which I have an extremely high execution count on the top 10 or so statements. This has led to excessive (soft) parsing and library cache pressure (library cache: mutex X). Naturally, the application isn't going to change to become sensible, so I have to work around it as best I can.

I have a shared pool of 3GB and I set my result_cache_max_size to 512MB and the mode to FORCE. Of course, this comes out of the shared pool, so I might end up making things worse. But I thought I'd give it a try in test.

Does anyone have any advice on how to size the Result Cache properly? I don't see a result_cache_advice view, alas.
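
For reference, the changes I made boil down to something like this (just a sketch of what I described above; the sizes are the ones I mentioned, and SCOPE = BOTH assumes the instance runs from an spfile):

```sql
-- Sketch of the settings described above (512MB cache, FORCE mode).
ALTER SYSTEM SET result_cache_max_size = 512M SCOPE = BOTH;
ALTER SYSTEM SET result_cache_mode = FORCE SCOPE = BOTH;
```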

Mark
  • 1. Re: Using the Result Cache in 11gR2
    jgarry Guru
I don't, but this seems a good start: http://www.oracle-developer.net/display.php?id=503 - it has links to the Pythian blog.
  • 2. Re: Using the Result Cache in 11gR2
    damorgan Oracle ACE Director
    Lots of experience with both Linux and Solaris and it works really well provided those queries are returning the same result set. If all of that churning is producing different outputs the Result Cache isn't going to be of value and, as you point out, may contribute to the issue.

    An excellent example of where the Result Cache produced a tremendous benefit in one application I worked on a few years ago was 17,500,000+ queries per hour that were returning the timezone so it could be written as part of a log record. My theory was that servers weren't changing timezones more frequently than one every year or two and the overhead dropped by orders of magnitude.

    My recommendation for determining optimal settings would be to run some real-time tests; the DBMS_RESULT_CACHE built-in package, which is fully documented and supported, will help.
    http://www.morganslibrary.org/reference/pkgs/dbms_result_cache.html
    http://www.oracle.com/pls/db112/search?remark=quick_search&word=dbms_result_cache
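
    As a rough sketch, the documented calls I'd use for those tests look like this (SQL*Plus syntax):

    ```sql
    -- Inspect and manage the result cache with the supported package.
    SET SERVEROUTPUT ON
    EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT(detailed => TRUE);  -- memory usage breakdown
    SELECT DBMS_RESULT_CACHE.STATUS FROM dual;                  -- e.g. ENABLED / BYPASS
    EXECUTE DBMS_RESULT_CACHE.FLUSH;                            -- clear cached results
    ```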
  • 3. Re: Using the Result Cache in 11gR2
    P F Explorer
    I don't see the correlation between the Result Cache and trying to reduce library cache pressure from 'library cache: mutex X' waits. I'm guessing that if your results return faster, it would probably make your contention worse.

    How many executions an hour are we talking about here? Millions? Tens of millions?
    Have you checked for SQL with high version counts?
    Are there any SGA resize operations?
    Does your application ever invalidate these cursors during heavy load?
    Is it only a problem when you see high 'library cache: mutex X' waits?
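
    On the version-count question, a quick sketch of the kind of check I mean (the threshold of 50 is arbitrary):

    ```sql
    -- Look for statements with high version counts in the shared pool.
    SELECT sql_id, version_count, executions, sql_text
    FROM   v$sqlarea
    WHERE  version_count > 50
    ORDER  BY version_count DESC;
    ```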

    There are certain very rare circumstances where true contention can be hit and copies of the hot object(s) are required. You probably want to talk to someone from Oracle Support if you believe it is a true concurrency issue.
  • 4. Re: Using the Result Cache in 11gR2
    Adrian Angelov Explorer
    Hi,

    To estimate the required settings for the Result Cache feature, as Daniel Morgan suggested, try to do it in a test environment with real data.

    Do not do it on a production database because of the following bug: Bug 14665745 Excessive "Result Cache: RC Latch" latch gets using DBMS_RESULT_CACHE.memory_report
    (it leads to a database hang on a busy database when you run a select on v$result_cache_objects or call memory_report)

    Based on the block_size column of v$result_cache_statistics and the block_count column of v$result_cache_objects (on a test environment), you'll be able to identify how much space a record for a particular cached statement or function requires. Based on your application, you'll then be able to identify the maximum number of records for that particular statement or function and thus estimate the total required space.
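
    As a sketch, on a test environment only (given the bug above, do not run this on a busy production system):

    ```sql
    -- Approximate space used per cached result set, using the block size
    -- from v$result_cache_statistics and block counts from v$result_cache_objects.
    SELECT o.name,
           COUNT(*)                      AS entries,
           SUM(o.block_count)            AS blocks,
           SUM(o.block_count) * bs.bytes AS approx_bytes
    FROM   v$result_cache_objects o,
           (SELECT TO_NUMBER(value) AS bytes
            FROM   v$result_cache_statistics
            WHERE  name = 'Block Size (Bytes)') bs
    WHERE  o.type = 'Result'
    GROUP  BY o.name, bs.bytes
    ORDER  BY approx_bytes DESC;
    ```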

    Let's say that we have a function that has one argument - customer id - and returns a number (0/1). The maximum number of entries in the cache would be the number of customers (if there is no app logic that might restrict them; for example: do not execute in case the customer is active, and you know that ~50% of all customers are active on average). With a simple test you'll be able to identify how much space is required to store one record (for 1 unique function invocation), and based on the app functionalities, you'll be able to identify the maximum possible number of records that are going to occupy the cache.
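
    A hypothetical sketch of such a function (table and column names invented for illustration; in 11.2 the table dependencies are tracked automatically):

    ```sql
    -- Returns 1 if the customer is active, 0 otherwise; one cache entry
    -- per distinct p_customer_id value.
    CREATE OR REPLACE FUNCTION is_customer_active (p_customer_id IN NUMBER)
      RETURN NUMBER
      RESULT_CACHE
    IS
      v_cnt NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_cnt
      FROM   customers
      WHERE  customer_id = p_customer_id
      AND    status = 'ACTIVE';
      RETURN LEAST(v_cnt, 1);
    END is_customer_active;
    /
    ```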

    In our case, we've applied one-off patches on top of 11.2.0.2 for the following bugs:

    ORA-00600 [qesrcDO_RemRO] When Result Cache Is Enabled in 11.2 [ID 1373176.1]
    Bug 12737666 - Assorted internal errors / result cache corruption [ID 12737666.8]
    There is another bug that causes problems when result cache usage exceeds 1GB

    For example, we've had situations where the status of the Result Cache became 'CORRUPT' (an undocumented status, but you can see it in the specification of the DBMS_RESULT_CACHE package), and once the status changed from VALID to CORRUPT, an attempt to flush the result cache led to ORA-00600. The only way to flush the cache and start using it again was to restart the instance (not an acceptable workaround in our case).

    Based on the above, I would recommend:
    - investigate for bugs on Metalink that might affect you (your version is different from ours)
    - patch a test environment and test as accurately as possible (try to reproduce activity similar to your production workload that uses the statements/functions in question)

  • 5. Re: Using the Result Cache in 11gR2
    marksmithusa Journeyer
    Hi, all,

    Great comments in this thread. Thanks a lot for the excellent input and the links were great too!

    We're running a copy of our Production batch in a volumetric test environment today. I look forward to seeing whether we get any bang for the buck - or whether it just goes BANG!

    This batch has a set of statements which run tens of millions of times per hour AND we have extensive use of VPD – pretty much for every one of the statements with the high execution count, we’re also calling the VPD package. So for the hundreds of millions of executions of application code, you can pretty much estimate that you can double that because of the VPD.

    This database is very important for the whole enterprise as a system of record, thus we have a lot of incoming requests from across the company to this database. As such, the application team cannot determine where/why we have so many executions of these statements (I’m fighting a losing battle there. I’ve tried, oh, my, have I tried).

    Now all of these high-executing statements are using bind variables (thankfully, else we'd be screwed) - so the statements are the same, but, I guess, it all depends on whether their results will be the same or not as to whether the Result Cache will blow up in my face or make me look like a genius.

    PF, this hopefully goes some way to explaining why I'm hoping that using the RC will help. I did open an SR with Oracle to confirm my initial opinion that the sheer number of executions = excessive soft parsing = library cache contention.

    I guess I'll see what the tests show - it's comforting to know that it does work.

    Mark
  • 6. Re: Using the Result Cache in 11gR2
    marksmithusa Journeyer
    Adrian Angelov wrote:
    Do not do it on a production database because of the following bug: Bug 14665745 Excessive "Result Cache: RC Latch" latch gets using DBMS_RESULT_CACHE.memory_report
    (it leads to database hang on a busy database when you run select on v$result_cache_objects or memory_report)
    You make a good point, Adrian. In our initial testing, it looked like my querying v$result_cache_objects resulted in latch waits for all the processes running at the time. It was very odd, and I had a tough time killing the offending session, too.

    Another thing to note is that, so far, the performance has been considerably worse. We haven't even got to the main body of the batch in test yet and one job - which admittedly could have a substantial cursor (and thus not like the fact I've stolen some memory from the shared pool) - has run for 3 hours instead of 34 minutes.

    It's not looking like something that's going to work for this particular database - but I did create a 512MB result cache and set the mode to FORCE. Maybe being more specific, caching only certain results rather than forcing everything, would be of benefit for us. I'm glad we have test environments where we can try this stuff out!
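
    By "more specific" I mean something along these lines: leave result_cache_mode at MANUAL (the default) and hint only the statements worth caching (table/columns hypothetical):

    ```sql
    -- With result_cache_mode = MANUAL, only hinted statements use the cache.
    SELECT /*+ RESULT_CACHE */ config_value
    FROM   app_config
    WHERE  config_key = :key;
    ```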
