Are we talking, by any chance, of your library cache? Specifically, did you compute those numbers by looking at V$LIBRARYCACHE? It helps to know these sorts of things.
Forget pinhitratio: it's irrelevant for any practical tuning problem you could conceivably come up with.
You have an abysmal gethitratio. That means nearly every time you waltz into the library cache hoping to find a pre-compiled execution plan, you draw a blank and therefore have to parse your SQL statement from scratch (a 'hard parse'). Not good for your latches or your CPU, and pretty bad for your Shared Pool size, because you'll be consuming memory in it like crazy.
First thing: measure your RELOADS and PINS. If you've got a lot of reloads, your shared pool is too small, your execution plans are ageing out quickly, and a bit of an increase in shared pool size will help retain more of them for longer - and thus increase your likelihood of a hit.
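If it helps, the reload and pin figures (together with the get hit ratio) can be read straight out of V$LIBRARYCACHE. This is just a generic sketch - the 1% rule of thumb in the comment is a common heuristic, not an Oracle-documented threshold:

```sql
-- Compare reloads to pins per namespace; a reloads/pins percentage
-- much above roughly 1% is the classic sign of an undersized shared pool.
select namespace,
       gets,
       gethitratio,
       pins,
       reloads,
       round(reloads / nullif(pins, 0) * 100, 2) as reload_pct
from   v$librarycache
order  by namespace;
```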
Second: measure your INVALIDATIONS. If you've got loads of these, you're doing lots of DDL. DDL on tables causes any execution plan that references those tables to be expired, so naturally next time you query them, you have to re-parse and your hit ratio takes a dive (and your miss ratio goes through the ceiling). The cure there is to stop doing DDL in a production database! Nothing else will save you, and increasing shared pool size is definitely not the answer.
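The invalidation counts come out of the same view. Again, only a sketch:

```sql
-- High INVALIDATIONS against the SQL AREA namespace usually means DDL
-- (or statistics gathering) is expiring otherwise-perfectly-good plans.
select namespace,
       invalidations
from   v$librarycache
where  invalidations > 0
order  by invalidations desc;
```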
If reloads is low and invalidations is low, then almost certainly (and most commonly) the cause of your problems is that the SQL being sent to the database doesn't use bind variables. It's using literals, which makes select * from emp where ename = 'BOB' look completely different from select * from emp where ename = 'SUE' - even though the execution plan in both cases is probably the same (such as 'use the index on ENAME etc etc'). Using bind variables means you submit a query that reads select * from emp where ename = :SOMEVARIABLE - and your code then 'binds' a specific value into that variable **after** it's been parsed. So you end up with one execution plan, and lots of queries for different specific values all using that one plan - meaning lots of hits in the library cache and a lower miss ratio.
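To make the contrast concrete, here's a rough SQL*Plus-style sketch against the standard EMP demo table (your client language - Java, PL/SQL, whatever - will have its own bind syntax, but the principle is identical):

```sql
-- Literal version: every distinct ENAME value produces a brand-new
-- statement text, so every execution is a hard parse.
select * from emp where ename = 'BOB';
select * from emp where ename = 'SUE';

-- Bind version: one statement text, one shared plan, parsed once and
-- then re-executed with different values supplied at run time.
variable v_ename varchar2(30)
exec :v_ename := 'BOB'
select * from emp where ename = :v_ename;
exec :v_ename := 'SUE'
select * from emp where ename = :v_ename;
```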
Short answer, therefore: if you've got a reasonably sized shared pool (say, a couple hundred megs) and if you're not doing vast amounts of DDL, it's the quality of your code that's at issue, and it will need to be re-written to make use of binds.
Please tell me one thing: I regularly collect system statistics and rebuild indexes late at night. Does that have an impact on the library cache?