28 Replies. Latest reply: Nov 9, 2009 11:28 AM by Niall Litchfield

    Problem with the cache hit ratio

    sono99
      Hello,

      I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has to be wrong with the cache hit ratio I am fetching!

      1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.

      In Oracle the cache hit ratio seems to be:
      cache hits / cache lookups,
      where cache hits <=> logical IO - physical reads
      cache lookups <=> logical IO

      Now some people use the 'session logical reads' statistic from the view v$sysstat; others use 'db block gets' + 'consistent gets'; whatever. At the end of the day it is all the same, and this is what I use:

      -----
      SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS,
             (P1.value + P2.value) AS CACHE_LOOKUPS,
             P4.value AS MAX_BUFFS_SIZEB
      FROM v$sysstat P1, v$sysstat P2, v$sysstat P3, V$PARAMETER P4
      WHERE P1.name = 'db block gets'
        AND P2.name = 'consistent gets'
        AND P3.name = 'physical reads'
        AND P4.name = 'sga_max_size'
      -----
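      Just to check the arithmetic rather than the view definitions, here is a minimal sketch (Python, using the totals reported further down in this post) of what that query computes:

```python
# Buffer cache hit ratio arithmetic, using the totals reported in this post.
# Only the combined lookup count matters, so the gets are shown pre-summed.
cache_lookups  = 119_173_819   # db block gets + consistent gets
physical_reads = 9_493_633     # cache_lookups - cache_hits, per the post

cache_hits = cache_lookups - physical_reads
hit_ratio  = cache_hits / cache_lookups
print(cache_hits, round(hit_ratio, 2))  # 109680186 0.92
```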


      2) The problem:
      The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a hugely inefficient query, consisting of the UNION of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful; the database will read the source table 5 times.
      And I can see in the physical I/O statistics of the source tablespace that the total bytes read is approximately 5 times the size of the text file that I used to bulk-load data into the database.

      Some of the relevant stats, wait events:
      db file scattered read: 1129.93 seconds

      Elapsed time: 1311.9 seconds
      CPU time: 179.84 seconds

      SGA max Size: 314572800 Bytes

      And total bytes read: 77771964416 B (approximately 72 GB).
      The source txt file loaded into the database was approximately 16 GB.
      The amount read was about 4.5 times the size of the source datafile.


      I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio:
      Cache hit ratio: 0.92
      Cache hits: 109680186
      Cache lookups: 119173819

      I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
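      A quick sanity check, assuming the default 8 KB block size (not stated above), suggests the ratio is arithmetically consistent with the byte counts; the real puzzle is why the logical-read count is so large:

```python
# Reconciling the byte counts with the ratio, assuming an 8 KB block size.
bytes_read = 77_771_964_416
block_size = 8192                          # assumed db_block_size
phys_blocks = bytes_read // block_size     # physical block reads from bytes

cache_lookups = 119_173_819
cache_hits    = 109_680_186
implied_phys  = cache_lookups - cache_hits # physical reads implied by the ratio

# ~9.5M blocks really were read from disk; the ratio comes out at 92% only
# because ~119M logical reads were performed against those blocks.
print(phys_blocks, implied_phys)  # 9493648 9493633
```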


      3) Procedure of taking stats:
      Now, to retrieve these stats I snapshot the system twice: once before the query, once after the query.
      But this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.

      Could the problem, assuming there is one, be related to this:

      "The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."

      What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each v$sesstat once a session terminates? If so, then my procedure for gathering those stats should be correct.

      Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?
        • 1. Re: Problem with the cache hit ratio
          sb92075
          http://oracledba.co.uk/tips/choose.htm
          Choose any value you desire for BCHR with above PL/SQL.

          BCHR is a Mythical Indicator of Performance
          • 2. Re: Problem with the cache hit ratio
            sono99
            This would be great:
            ----
            --
            -- Flipping the formula we can work out how many more consistent gets
            -- we need to increase the hit ratio
            --
            v_additional_congets := trunc(v_phy/(1-p_ratio/100)-v_db - v_con);

            ----
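            For the curious, the quoted formula just inverts the hit-ratio definition. A hypothetical sketch of the same algebra (the function name and figures are mine, not from the linked script):

```python
import math

def additional_consistent_gets(phy, db, con, p_ratio):
    # ratio = 1 - phy / (db + con + x)  =>  x = phy / (1 - ratio) - db - con
    # p_ratio is the desired hit ratio in percent.
    return math.trunc(phy / (1 - p_ratio / 100) - db - con)

# e.g. 500 physical reads against 500 logical reads is a 0% hit ratio;
# padding in 1500 extra consistent gets would dress it up as 75%.
print(additional_consistent_gets(500, 100, 400, 75))  # 1500
```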

            if I wanted to manipulate my hit rate, which I don't!

            But wait a second. If I understood the trick correctly, you get an additional consistent get for each row you fetch from a database block?

            If you have a database block, and you read 200 rows from that database block, those are 200 consistent gets?
            Therefore, table scanning when you need all the rows means that you have a perfect hit rate!
            Right?

            But couldn't I change the cache hit rate formula to simply:

            1 - physical_reads/db_block_gets,
            where db block gets would be all the database blocks I wanted to fetch, and physical reads are the number of physical blocks I retrieved from disk.

            I could do this, right?
            Basically what I need to know is:
            is db block gets always >= physical reads?

            If it isn't, I have a problem. If it is, I can obtain a cache hit rate that actually makes sense. I don't want to count gets per row; I want to know how many database blocks I needed to retrieve because one row was contained in a database block that was not in the cache.
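            A made-up-numbers sketch of why the answer is no: 'db block gets' counts only current-mode gets, which are nearly absent in a read-only workload, so physical reads can dwarf them and the proposed formula can go negative:

```python
# Invented figures for a read-only full scan: almost all logical I/O is
# consistent gets; 'db block gets' (current-mode gets) stays tiny.
db_block_gets   = 20
consistent_gets = 1_000_000
physical_reads  = 90_000     # easily exceeds db block gets

proposed = 1 - physical_reads / db_block_gets                     # nonsense
usual    = 1 - physical_reads / (db_block_gets + consistent_gets) # ~0.91
print(proposed < 0, round(usual, 2))  # True 0.91
```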
            • 3. Re: Problem with the cache hit ratio
              635471
              Cache hit ratios are largely deprecated as a tuning methodology. Tune the queries, and monitor the cache sizes using the buffer cache advice functionality.
              • 4. Re: Problem with the cache hit ratio
                Hemant K Chitale
                Ground Rules :
                0.0. Do NOT aim for a 'target' Cache Hit Ratio.
                0.1. Do NOT compute a Cache Hit Ratio.
                1.0. Formulae you come up with will not (or might not) be factoring in Direct Path Reads.

                You would be better off starting by looking at the number of logical reads. Is an SQL (or all SQLs in the instance) requiring too many logical reads? A major part of performance tuning / SQL rewrites is reducing the number of logical reads.
                If you have a database block, and you read 200 rows from that database block, those are 200 consistent gets?
                Wrong.
                Therefore, table scanning when you need all the rows means that you have a perfect hit rate!
                A table scan would be doing 1 'consistent get' for a table block, irrespective of the number of rows in that block.
                If you retrieve rows singly (e.g. a Nested Loop to the table OR a PL/SQL cursor that does a fresh fetch for each row), then every repeated retrieval of the same block is a new 'consistent get'.
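                A toy model of the distinction, with invented figures: consistent gets are counted per block visit, not per row returned.

```python
# 1,000 blocks holding 200 rows each.
n_blocks = 1_000
rows_per_block = 200
n_rows = n_blocks * rows_per_block

full_scan_gets  = n_blocks  # one consistent get per block visited
row_by_row_gets = n_rows    # one consistent get per single-row revisit
print(full_scan_gets, row_by_row_gets)  # 1000 200000
```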


                Hemant K Chitale
                http://hemantoracledba.blogspot.com
                • 5. Re: Problem with the cache hit ratio
                  Jonathan Lewis
                  user10282047 wrote:
                  Hello,

                  I am having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has to be wrong with the cache hit ratio I am fetching!
                  The numbers you get will depend on (amongst other things)
                  The version of Oracle
                  The tool (and settings) you are using to run the query
                  The actual query

                  It would be useful to see the execution plan, just in case you have simplified the problem so much that a critical detail is missing.

                  Suggestions:

                  Do you really have to use sessions external to the query session? Can you query v$mystat joined to v$statname from the session itself? (Note: the method you describe should give the result you want provided no-one else has been using the system and causing other activity in v$sysstat.)

                  If the query really is nothing but a UNION ALL of five tablescans, then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads. If the arraysize is one, then the number of block visits could be close to the number of rows fetched, as a block has to be released at the end of each fetch and then reacquired for the next fetch.
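                  A rough model of that effect, under the simplifying assumption that each fetch call revisits at least one block (figures invented):

```python
import math

# Approximate block visits for a tablescan fetched with a given arraysize:
# each fetch call reacquires the current block, so visits are roughly the
# larger of the block count and the number of fetch calls.
def approx_block_visits(n_blocks, n_rows, arraysize):
    fetches = math.ceil(n_rows / arraysize)
    return max(n_blocks, fetches)

print(approx_block_visits(1_000, 200_000, 1))    # 200000: ~one visit per row
print(approx_block_visits(1_000, 200_000, 500))  # 1000: ~one visit per block
```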

                  Don't forget the effect of block cleanout and read-consistency: check the statistics "redo size" and "redo entries" for indications of cleanout, and "consistent gets - examination" for read-consistency. You may find that a lot of your block visits are to undo blocks.

                  Regards
                  Jonathan Lewis
                  http://jonathanlewis.wordpress.com
                  http://www.jlcomp.demon.co.uk

                  To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}
                  {noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

                  "Science is more than a body of knowledge; it is a way of thinking"
                  Carl Sagan
                  • 6. Re: Problem with the cache hit ratio
                    sono99
                    Hi,
                    First off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS whenever I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.

                    -----
                    quote:
                    It would be useful to see the execution plan, just in case you have simplified the problem so much that a critical detail is missing.
                    -----


                    First, the query code:

                    ----
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:>SQL> CREATE TABLE FAVFRIEND
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 2 NOLOGGING TABLESPACE TARGET
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 3 AS
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 4 SELECT ID as USRID, FAVF1 as FAVF FROM PROFILE
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 5 UNION ALL
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 6 SELECT ID as USRID, FAVF2 AS FAVF FROM PROFILE
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 7 UNION ALL
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 8 SELECT ID as USRID, FAVF3 AS FAVF FROM PROFILE
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 9 UNION ALL
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 10 SELECT ID as USRID, FAVF4 AS FAVF FROM PROFILE
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 11 UNION ALL
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 12 SELECT ID as USRID, FAVF5 AS FAVF FROM PROFILE
                    [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 13 ;
                    -----


                    Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.

                    Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.

                    I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo "log" and temp file used is negligible, practically zero.

                    Most of the I/O is reading; a little of the I/O is writing.
                    My idea is not to optimize this query, it is to understand how it performs.
                    I have other implementations to test; namely, I am having trouble with one of them.


                    Furthermore, I doubt the query plan Oracle is using actually involves tablescans (as I'd like it to do); because in the wait events, most of the wait time for this query is spent doing "db file scattered read". And I think this is different from a tablescan.

                    -------
                    Do you really have to use sessions external to the query session ? Can you query v$mystat joined to v$statname from the session itself.
                    ---------
                    No, I don't want to do that!
                    I avoid as much as possible having the code I execute be implemented in Java.
                    When I can avoid it I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
                    So yes, I only connect to the database with JDBC... in the very last session.
                    Of course, I could have put both the gather-stats-before-query and the gather-stats-after-query into a single script: the script that would also be running the query.

                    But that would cause me a number of problems, namely that some of the SQL I build has to be generated dynamically. And I don't want to be replicating the snapshotting code into every query script I make. This way I have one SQL file with the snapshotting scripts, and multiple scripts for running each query. I avoid code replication in this manner.

                    Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.

                    ------
                    then what it the array fetch size ? If the array fetch size is large enough the number of block visits would be similar to the number of physical block reads.
                    ------

                    I don't know what the arraysize you mention is. I have not touched that parameter. So whatever it is, it's the default.

                    By the way, I don't fetch the query results into my client; the query results are dumped into a target output table.
                    So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.

                    As for the query plan, if I am not mistaken you can't get query plans for queries that are CREATE TABLE AS SELECT.
                    I can however omit the CREATE TABLE part and just ask for the evaluation of the SELECT part of the query; I believe it should be the same.


                    -----
                    Operation                           Optimizer  Cost  Cardinality  Bytes
                    SELECT STATEMENT                    ALL_ROWS   2563  586110       15238860
                    UNION-ALL
                    TABLE ACCESS (FULL) SONO99.PROFILE             512   117222       3047772
                    TABLE ACCESS (FULL) SONO99.PROFILE             513   117222       3047772
                    TABLE ACCESS (FULL) SONO99.PROFILE             513   117222       3047772
                    TABLE ACCESS (FULL) SONO99.PROFILE             513   117222       3047772
                    TABLE ACCESS (FULL) SONO99.PROFILE             513   117222       3047772
                    (Partition and predicate columns omitted: all empty.)
                    ------

                    This query plan was taken from SQL Developer, exported to txt, and the PROFILE table here has only 100k tuples.
                    Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.

                    Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
                    The UNION ALL in that execution plan seems to take as leaf tables 5 99sono.PROFILE tables, and to do a table scan over them all. So I'd say that the RDBMS should only scan each database block once and not 5 times.
                    But it doesn't seem to be so. It seems like what Oracle is doing is scanning each table completely, and then moving on to the next SELECT statement in the UNION ALL. Because, given that the amount of the source table read was 5 times the size of the source table, Oracle didn't reuse the blocks it had already read.

                    But this is just my feeling.
                    Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? 5 per table block,
                    one for each scan?

                    My best regards,
                    Nuno (99sono xp).
                    • 7. Re: Problem with the cache hit ratio
                      108476
                      Hi Nuno,
                      let me start by saying that there were many things in your post that I could not understand.
                      That's a common comment . . .

                      Remember, a "hit ratio" ONLY measures the propensity of a data block to be in the buffer upon RE-READ.

                      If your DDL is not re-reading any data blocks, your query must read them all from disk anyway, unless you pre-cache them.

                      Here is a plain English description about why measuring your hit ratio will not help you here:

                      http://www.dba-oracle.com/m_data_buffer_hit_ratio.htm
                      Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing?
                      In general, the goal is to fetch the rows that you want with a minimum of consistent gets.

                      But in your case, you must, by your DDL, be performing full-table scans . .

                      Do you have multiple CPU's on the server?

                      If so, you could parallelize the full-scans to make them run faster:

                      http://www.oracle-training.cc/oracle_tips_opq.htm

                      Hope this helps . . .

                      Donald K. Burleson
                      Oracle Press author
                      Author of "Oracle Tuning: The Definitive Reference"
                      http://www.rampant-books.com/t_oracle_tuning_book.htm
                      "Time flies like an arrow; Fruit flies like a banana".
                      • 8. Re: Problem with the cache hit ratio
                        jgarry
                        sono99 wrote:
                        Hi,
                        First off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS whenever I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.
                        Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first: you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently from other database engines, and some things become nonsense if looked at from the viewpoint of a single user.

                        >
                        -----
                        quote:
                        It would be useful to see the execution plan, just in case you have simplified the problem so much that a critical detail is missing.
                        -----


                        First, the query code:

                        ----
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:>SQL> CREATE TABLE FAVFRIEND
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 2 NOLOGGING TABLESPACE TARGET
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 3 AS
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 4 SELECT ID as USRID, FAVF1 as FAVF FROM PROFILE
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 5 UNION ALL
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 6 SELECT ID as USRID, FAVF2 AS FAVF FROM PROFILE
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 7 UNION ALL
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 8 SELECT ID as USRID, FAVF3 AS FAVF FROM PROFILE
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 9 UNION ALL
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 10 SELECT ID as USRID, FAVF4 AS FAVF FROM PROFILE
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 11 UNION ALL
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 12 SELECT ID as USRID, FAVF5 AS FAVF FROM PROFILE
                        [2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:> 13 ;
                        -----


                        Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.
                        There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.

                        >
                        Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.
                        >
                        I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo "log" and temp file used is negligible, practically zero.
                        Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it, there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into statspack or EM to tell you what is going on, the datafile statistics may not be all that informative for this case.

                        >
                        Most of the I/O is reading; a little of the I/O is writing.
                        My idea is not to optimize this query, it is to understand how it performs.
                        Well, have you read the Concepts manual?
                        I have other implementations to test; namely, I am having trouble with one of them.


                        Furthermore, I doubt the query plan Oracle is using actually involves tablescans (as I'd like it to do); because in the wait events, most of the wait time for this query is spent doing "db file scattered read". And I think this is different from a tablescan.
                        Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].

                        >
                        -------
                        Do you really have to use sessions external to the query session ? Can you query v$mystat joined to v$statname from the session itself.
                        ---------
                        No, I don't want to do that!
                        I avoid as much as possible having the code I execute be implemented in Java.
                        Why do you think Java has anything to do with this? In your session, desc v$mystat and v$statname; these are views you can look at.
                        When I can avoid it I don't query the database directly through JDBC; I use the RDBMS command line client, which is supposed to be very robust.
                        Er, is that sqlplus?
                        So yes, I only connect to the database with JDBC... in the very last session.
                        Of course, I could have put both the gather-stats-before-query and the gather-stats-after-query into a single script: the script that would also be running the query.

                        But that would cause me a number of problems, namely that some of the SQL I build has to be generated dynamically. And I don't want to be replicating the snapshotting code into every query script I make. This way I have one SQL file with the snapshotting scripts, and multiple scripts for running each query. I avoid code replication in this manner.
                        Instrumentation is a large subject; dynamic sql generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing." You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books, he explains what is going on in detail, including in some places the non-Oracle viewpoint.
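                        To make the bind-variable point concrete, here is a sketch using Python's sqlite3 as a neutral stand-in (the mechanics differ in Oracle, but the shared-SQL idea is the same): one parameterized statement text can be reused, whereas literal SQL forces a fresh parse per value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profile (id INTEGER, favf TEXT)")

# Literal SQL: every value produces a distinct statement text
# (in Oracle, a hard parse each time).
for i in range(3):
    conn.execute(f"INSERT INTO profile VALUES ({i}, 'favf{i}')")

# Bind variables: one statement text, values bound at execute time.
conn.executemany("INSERT INTO profile VALUES (?, ?)",
                 [(i, f"favf{i}") for i in range(3, 6)])

count = conn.execute("SELECT COUNT(*) FROM profile").fetchone()[0]
print(count)  # 6
```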

                        >
                        Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.
                        No, there are sessions Oracle controls. If you are on unix, you can easily see this, but there are ways to see it on Windows, too. In some cases, your own sessions can affect themselves.

                        >
                        ------
                        then what is the array fetch size? If the array fetch size is large enough the number of block visits would be similar to the number of physical block reads.
                        ------

                        I don't know what the arraysize you mention is. I have not touched that parameter. So whatever it is, it's the default.
                        You should find out! You can go to http://tahiti.oracle.com and type array fetch size into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.

                        >
                        By the way, I don't fetch the query results into my client; the query results are dumped into a target output table.
                        So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
                        You may hear this phrase a lot:

                        "It depends."

                        >
                        As for the query plan, if I am not mistaken you can't get query plans for queries that are CREATE TABLE AS SELECT.
                        What?
                        JG@TTST> explain plan for create table jjj as select * from product_master;
                        
                        Explained.
                        JG@TTST> select count(*) from plan_table;
                        
                          COUNT(*)
                        ----------
                                 3
I can, however, omit the CREATE TABLE part and just ask for the evaluation of the SELECT part of the query; I believe it should be the same.


                        -----
                        "Optimizer"     "Cost"     "Cardinality"     "Bytes"     "Partition Start"     "Partition Stop"     "Partition Id"     "ACCESS PREDICATES"     "FILTER PREDICATES"
                        "SELECT STATEMENT"     "ALL_ROWS"     "2563"     "586110"     "15238860"     ""     ""     ""     ""     ""
                        "UNION-ALL"     ""     ""     ""     ""     ""     ""     ""     ""     ""
                        "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "512"     "117222"     "3047772"     ""     ""     ""     ""     ""
                        "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
                        "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
                        "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
                        "TABLE ACCESS(FULL) SONO99.PROFILE"     ""     "513"     "117222"     "3047772"     ""     ""     ""     ""     ""
                        ------

                        This query plan was taken from sql developer, exported to txt, and the PROFILE table here has only 100k tuples.
Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.

Regarding this plan: the UNION ALL seems to be more than just a binary operator... it seems to be n-ary.
The UNION ALL in that execution plan takes five full scans of the SONO99.PROFILE table as its inputs. So I'd say the RDBMS should only scan each database block once, not 5 times.
But it doesn't seem to be so. It looks like Oracle scans each table completely and then moves on to the next SELECT statement in the UNION ALL, because the amount of the source table that was read is 5 times the size of the source table: Oracle didn't reuse the blocks it had already read.

                        But this is just my feeling.
                        Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? Five?
One for each table block?
                        It depends.

                        >
                        My best regards,
                        Nuno (99sono xp).
                        • 9. Re: Problem with the cache hit ratio
                          Hemant K Chitale
                          You are doing a UNION ALL of the same table 5 times.
                          The first pass to read the table may well require Physical Read calls.
If the Buffer Cache is large enough relative to the table size, all 4 subsequent passes would be cache hits.
You would have done a Logical Read of the table 5 times, of which 1 occurrence would also have required Physical Reads.
Why would you expect Oracle not to read the blocks again? If you expect the runtime engine to be "smart" enough to say "I have already read the table once, I don't need to read it again", that would be an incorrect expectation. Furthermore, your UNION ALL specifies that you want 5 copies of each row, so Oracle has to read the table 5 times to create 5 images of each row.
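A sketch of how you can verify this against your own session, using the standard V$MYSTAT and V$STATNAME views: run it before and after the UNION ALL query and diff the values.

```sql
-- Logical vs physical I/O statistics for the current session only
SELECT sn.name, st.value
  FROM v$mystat   st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
 WHERE sn.name IN ('db block gets', 'consistent gets', 'physical reads');
```

With a sufficiently large buffer cache you would expect the logical reads to grow by roughly five passes over the table, while the physical reads grow by only about one pass.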

Note: 'db file scattered read' is a wait event indicating a multiblock read call.


                          Hemant K Chitale
                          • 10. Re: Problem with the cache hit ratio
                            706417
                            Connor McDonald's BCHR thing is meaningless: it is, by its very design, artificial and has no relation to a real world system. How this rubbish ever got popularized is baffling.

                            Anyway, here's another amazing "fact" to shock & awe the braindead who swallow whole and then regurgitate McDonald's utterly, utterly vapid pap:

                            Selecting sysdate from dual between the hours of 14:00 and 15:00 will shut down the server and email your boss to tell him/her that he/she smells of onion.



                            Regards - Don Lewis
                            • 11. Re: Problem with the cache hit ratio
                              Fahd.Mirza
Don't worry about ratios, worry about waits.

                              regards
                              • 12. Re: Problem with the cache hit ratio
                                Hans Forbrich
                                sono99 wrote:
                                If you have a database block, and you read 200 rows from that database block, those are 200 consistent gets?
                                The usual interpretation is:

                                DB Block Get = Current block ... that is a block that has the 'current data' whether inside a transaction or after commit
                                Consistent Get = An older version of the block, often created by 'getting the current block and applying undo' used to support a consistent read.

                                Indeed, a single block may have several 'copies', each at a different point in time, used to satisfy the requirement that "Oracle never allows a third-party dirty read against a regular database block/row"

                                (I like Mark Bobak's explanation in this chain http://www.orafaq.com/maillist/oracle-l/2003/12/18/1452.htm)
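A sketch of how to see those multiple copies, assuming you have SELECT access to V$BH and DBA_OBJECTS: each buffered copy of a block appears as a row, and the STATUS column distinguishes the current version ('xcur') from consistent-read clones ('cr').

```sql
-- Count current vs consistent-read buffer copies for one table
SELECT bh.status, COUNT(*) AS buffers
  FROM v$bh bh
 WHERE bh.objd = (SELECT data_object_id
                    FROM dba_objects
                   WHERE owner = 'SONO99'
                     AND object_name = 'PROFILE')
 GROUP BY bh.status;
```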
                                But couldn't I change the cache hit rate formula to simply:

1 - physical_reads / db_block_gets.
                                Where db block gets would be all the database blocks I wanted to fetch, and physical reads are the number of physical blocks I retrieved from disk.

                                I could do this right?
                                You can change the formula to whatever you want.

The original intent of the formula was to give a relative indication of the amount of SQL that was able to occur against data already in memory vs how much needed a 'refresh' from disk. But there have been all sorts of exceptions, conditions, and changes to the way Oracle works (and to computing and operating systems in general), such that the original formulae became redundant, or had holes, or didn't reflect things like RAC Cache Fusion, or ...

So change it to what makes sense to you. It will probably be as meaningful as any other variant. The challenge is that it is a ratio - even if it has all the right parts, that alone can obscure the meaningful information.
                                Basically what I need to know is:
                                db block gets is always >= physical reads?
Bottom line is you want to decide what the ratio means to you, and then come up with a formula. Ideally you will include all the parts that deal with reading from disk vs reading from the DB Buffer Cache. Then you need to decide whether to include temporary and undo segments and so on. (To do this accurately needs a fair knowledge of the database internals.) Then you will identify the parts that are negligible (and identify when they need to be resurrected) and reduce the formula to a manageable size.
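As a concrete sketch, the classic formula quoted at the top of the thread can be written as a single pass over V$SYSSTAT (instance-wide figures since startup, subject to all the usual caveats about ratios):

```sql
-- Classic BCHR: (logical reads - physical reads) / logical reads
SELECT ROUND( 100 *
         ( SUM(DECODE(name, 'db block gets',   value, 0))
         + SUM(DECODE(name, 'consistent gets', value, 0))
         - SUM(DECODE(name, 'physical reads',  value, 0)) )
       / NULLIF( SUM(DECODE(name, 'db block gets',   value, 0))
               + SUM(DECODE(name, 'consistent gets', value, 0)), 0)
       , 2) AS bchr_pct
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
```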

                                However, as others have mentioned, buffer cache hit ratio is no longer considered an absolute tuning tool. That is to say, you should not blindly attempt to get the ratio to a certain value.

Instead, it may be useful as a relative indicator - IF you have a specific workload, manage to capture the BCHR, and have a certain satisfaction about the performance; AND you have a change in performance with the same workload and the BCHR is significantly different; THEN you have a moderately strong reason to investigate a tuning issue.

                                Then again, if you have a change in performance, you probably have that reason without evaluating the BCHR anyway.


Note that Connor's script does exactly one thing (for me): it verifies that changing the workload can yield a different BCHR. This, however, says absolutely nothing about performance changes (if indeed any occur). So perhaps a better thing to do is to come up with a formula that defines 'workload'.
                                • 13. Re: Problem with the cache hit ratio
                                  Niall Litchfield
                                  Don Lewis wrote:
                                  Connor McDonald's BCHR thing is meaningless: it is, by its very design, artificial and has no relation to a real world system. How this rubbish ever got popularized is baffling.
well, at least partly, because at the time it was written, conventional wisdom suggested that it was important to see how efficiently your system was executing a given workload, rather than addressing how efficient the workload itself was. So people genuinely did configure systems so that they would read 1GB of data from memory rather than 1MB from disk in order to return a few hundred rows. Connor's example does exactly that: a pile of useless work in order to 'improve' the, then, most common KPI for an Oracle database. It should at least show that things that improve the hit ratio may well be doing absolutely nothing for performance.

                                  Anyway, here's another amazing "fact" to shock & awe the braindead who swallow whole and then regurgitate McDonald's utterly, utterly vapid pap:

                                  Selecting sysdate from dual between the hours of 14:00 and 15:00 will shut down the server and email your boss to tell him/her that he/she smells of onion.
                                  more wit and wisdom from the 80k a year man I see.

                                  Niall
                                  • 14. Re: Problem with the cache hit ratio
                                    706417
                                    Hi Niall,

                                    McDonald's BCHR thing is junk - just as this:

                                    +"Selecting sysdate from dual between the hours of 14:00 and 15:00 will shut down the server and email your boss to tell him/her that he/she smells of onion."+

                                    is also junk.

                                    But both are doable. Just as McDonald messed about to create a ridiculously artificial scenario, then so can I... so can anyone. It ain't smart and it ain't clever. Mindlessly serving-up warmed-through McDonald's: never a healthy thing to do.

                                    For those interested in the shutdown/onion thing: simply have a job to scan your favourite v$ structure for "SELECT SYSDATE FROM DUAL" then have a scheduler job send an email to your boss and then run an OS script to reboot the server (root password/sudo access required). As I said, junk, but no less trashy than McDonald's stupid, meaningless, laughably inane, pretending to mean something barf. And that you seek to explain and justify it is even worse.

                                    Your blog: it stinks.

                                    My salary: if it bothers you (and I bet it does, given that your decision to stay safe with the lamentable Lambeth Council has backfired owing to their pension fund being at least £172 million down) then be big 'n' bold and get a real job. Getting catty over other folks' pay won't help. Silly.