      • 15. Re: Pagination query help needed for large table - force a different index
        751830
        OK, this one here is the most efficient:
        SELECT t2.*
        FROM  
        (
           SELECT t1.*, rownum rnum 
           FROM 
           (
               SELECT /*+ first_rows(100) */  members.*
                FROM members
                WHERE last_name =  'Smith'
                ORDER BY joindate
           ) t1
           WHERE rownum <= 100
        ) t2
        WHERE rnum >= 1
        Here's the explain plan in a nutshell:
        100 rows selected.
        
        
        Execution Plan
        ----------------------------------------------------------
        Plan hash value: 1257535301
        
        ----------------------------------------------------------------------------------------------------
        | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
        ----------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT               |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
        |*  1 |  VIEW                          |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
        |*  2 |   COUNT STOPKEY                |                         |       |       |            |          |
        |   3 |    VIEW                        |                         |   100 | 37300 |  2947   (1)| 00:00:36 |
        |*  4 |     TABLE ACCESS BY INDEX ROWID| MEMBERS                 |   137K|    20M|  2947   (1)| 00:00:
        |   5 |      INDEX FULL SCAN           | JOINDATE_ORDER_IDX      |  5100 |       |    19   (0)| 00:00:01 |
        ----------------------------------------------------------------------------------------------------
        
        Predicate Information (identified by operation id):
        ---------------------------------------------------
        
           1 - filter("RNUM">=001)
           2 - filter(ROWNUM<=100)
           4 - filter("LAST_NAME"='Smith')
        
        
        Statistics
        ----------------------------------------------------------
                 57  recursive calls
                  0  db block gets
                370  consistent gets
                  0  physical reads
                  0  redo size
              15244  bytes sent via SQL*Net to client
                462  bytes received via SQL*Net from client
                  8  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                100  rows processed
        This query is very fast... until I get to a specific point in the paging. Then the query suddenly takes a tremendous amount of time: 1 second vs. 40+ seconds. The degradation is sudden, so I can only assume I'm overflowing some working memory and it is performing disk I/Os instead. But I have no idea what exactly it is doing...

        Here's the autotrace for it when paging rows numbered 90001 - 90100:
        100 rows selected.
        
        
        Execution Plan
        ----------------------------------------------------------
        Plan hash value: 1257535301
        
        ----------------------------------------------------------------------------------------------------
        | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
        ----------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT               |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
        |*  1 |  VIEW                          |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
        |*  2 |   COUNT STOPKEY                |                         |       |       |            |          |
        |   3 |    VIEW                        |                         |   100 | 37300 |  2947   (1)| 00:00:36 |
        |*  4 |     TABLE ACCESS BY INDEX ROWID| MEMBERS                 |   137K|    20M|  2947   (1)| 00:00:
        |   5 |      INDEX FULL SCAN           | JOINDATE_ORDER_IDX      |  5100 |       |    19   (0)| 00:00:01 |
        ----------------------------------------------------------------------------------------------------
        
        Predicate Information (identified by operation id):
        ---------------------------------------------------
        
           1 - filter("RNUM">=90001)
           2 - filter(ROWNUM<=90100)
           4 - filter("LAST_NAME"='Smith')
        
        
        Statistics
        ----------------------------------------------------------
                 57  recursive calls
                  0  db block gets
             439471  consistent gets
              18878  physical reads
                  0  redo size
              15723  bytes sent via SQL*Net to client
                462  bytes received via SQL*Net from client
                  8  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                100  rows processed
        Any thoughts? Thanks.
        • 16. Re: Pagination query help needed for large table - force a different index
          Centinul
          Can you use the GATHER_PLAN_STATISTICS hint that I mentioned in my posts? Would you also be able to run a SQL Trace and post the TKPROF results?
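           For reference, one way to produce such a trace is a session-level SQL trace turned on around the query. This is only a sketch (the DBMS_MONITOR calls are standard 10g, but the trace file name and location depend on your environment):

               -- enable extended SQL trace for the current session (assumes suitable privileges)
               ALTER SESSION SET tracefile_identifier = 'pagination_test';
               EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);

               -- ... run the pagination query here, fetching it to completion ...

               EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

           The resulting trace file (in user_dump_dest) can then be formatted with tkprof, e.g. "tkprof <tracefile>.trc report.txt sys=no".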

           Also, the query plan indicates there are approximately 137K rows coming out of the TABLE ACCESS step. Does that sound like it's in the right ballpark?

          What are the results of the following command:
          SHOW PARAMETER OPTIMIZER
          Thanks!
          • 17. Re: Pagination query help needed for large table - force a different index
            Timur Akhmadeev
            Hi,
            xaeryan wrote:
            Any thoughts? Thanks.
            As already suggested, get runtime execution statistics with GATHER_PLAN_STATISTICS / DBMS_XPLAN.
             From the autotrace it's obvious you are hitting physical reads, which is expected behavior since you cannot have all of the data needed for later pages present in the buffer cache.
             I see you've tried to use [manual optimization|http://jonathanlewis.wordpress.com/2008/04/27/manual-optimisation], which is not precisely your case.
            You can improve that particular type of query with an index on (last_name) or (last_name, createdtadstp).
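             To make that concrete, the suggestion amounts to DDL along these lines; this is only a sketch, and the index name (and whether to include the ORDER BY column) is an assumption, not something specified in the thread:

                 -- index driven by the filter column; optionally add the ORDER BY column
                 -- so that matching rows come back already sorted by joindate
                 CREATE INDEX members_lastname_joindate_idx
                     ON members (last_name, joindate);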
            • 18. Re: Pagination query help needed for large table - force a different index
              751830
              Arg, ok this one is frustrating me. If my WHERE condition filters down to a small number of records (say 50) out of the 7 million, suddenly the performance of this part of the query goes to crap (like 5 minutes):
              SELECT /*+ first_rows(100) */ members.*
                      FROM members
                      where last_name = 'Smith'
                      ORDER BY joindate
              If my WHERE filter returns a larger number of records, it goes back to being very quick.
              As a note, my original query (returning just the ROWID in the inner query, then joining it in the outermost query) doesn't suffer from this issue.

              The explain plans for BOTH cases are identical:
                 Rows    Plan                                        Cost
                  100    SELECT STATEMENT                            2948
               137878     TABLE ACCESS BY INDEX ROWID MEMBER         2948
                 5100      INDEX FULL SCAN JOINDATE_ORDER_IDX          19
              What gives?
              • 19. Re: Pagination query help needed for large table - force a different index
                751830
                Timur Akhmadeev wrote:
                Hi,
                xaeryan wrote:
                Any thoughts? Thanks.
                As already suggested, get runtime execution statistics with GATHER_PLAN_STATISTICS / DBMS_XPLAN.
                 From the autotrace it's obvious you are hitting physical reads, which is expected behavior since you cannot have all of the data needed for later pages present in the buffer cache.
                 I see you've tried to use [manual optimization|http://jonathanlewis.wordpress.com/2008/04/27/manual-optimisation], which is not precisely your case.
                You can improve that particular type of query with an index on (last_name) or (last_name, createdtadstp).
                Ok, I will try this. What is the benefit over autotrace? I read here that Gather Plan Stats hint is also just an estimation of the plan: [http://www.dba-oracle.com/t_gather_plan_statistics.htm]
                • 20. Re: Pagination query help needed for large table - force a different index
                  sb92075
                  It needs to obtain 100 rows before showing them to you.
                  Sometimes it takes longer to retrieve the first 100 rows than other times.
                   When fewer than 100 rows are returned, it must complete a Full Table Scan before you see the results.
                  • 21. Re: Pagination query help needed for large table - force a different index
                    751830
                    sb92075 wrote:
                    It needs to obtain 100 rows before showing them to you.
                    Sometimes it takes longer to retrieve the first 100 rows than other times.
                     When fewer than 100 rows are returned, it must complete a Full Table Scan before you see the results.
                    Ooof, that's rough on this table.

                    But I'm still not figuring out why this query does NOT suffer:
                    SELECT members.*
                    FROM members,
                    (
                        SELECT RID, rownum rnum
                        FROM
                        (
                            SELECT /*+ first_rows(100) */ rowid as RID 
                            FROM members
                            WHERE last_name = 'Frederick'
                            ORDER BY joindate
                        ) 
                        WHERE rownum <= 100 
                    ) 
                    WHERE rnum >= 1 
                             and RID = members.rowid
                    While this one does:
                    SELECT t2.*
                    FROM  
                    (
                       SELECT t1.*, rownum rnum 
                       FROM 
                       (
                           SELECT /*+ first_rows(100) */  members.*
                            FROM members
                            WHERE last_name =  'Frederick'
                            ORDER BY joindate
                       ) t1
                       WHERE rownum <= 100
                    ) t2
                    WHERE rnum >= 1
                    • 22. Re: Pagination query help needed for large table - force a different index
                      Jonathan Lewis
                      xaeryan wrote:
                      Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
                      SELECT members.*      -- Select all data from members table
                      FROM members,           -- members table added to FROM clause 
                      (
                      SELECT RID, rownum rnum
                      FROM
                      (
                      SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
                      FROM members
                      WHERE last_name = 'Smith'
                      ORDER BY joindate
                      ) 
                      WHERE rownum <= 100 
                      ) 
                      WHERE rnum >= 1 
                      and RID = members.rowid           -- Merge the members table on the rowid we pulled from the inner queries
                       Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
                       I've run up a quick test on 10.2.0.3 of what you're trying to do here and don't run into the same problem - the plan behaves as expected and obeys the hint. One thought - what does the smaller part of the query do if you remove the hint? Maybe what you're seeing is the default plan and not a plan forced by the hint. Have you declared either of joindate or seq to be not null?

                       Apart from that, your strategy is flawed. There are two parts to it: (a) sorting a selected set of rowids and then collecting a subset of the rows - and that part of the strategy is reasonable - and (b) forcing Oracle to walk through the (joindate, seq) index - and at first sight this seems to be a very bad idea.

                      You say you have millions of rows, and that the users have many different search criteria. Think about what happens if the users supply a search that returns less than 100 rows in total. You have to walk ALL the data in the table by joindate, checking every row for the search criteria - this will be enormously expensive. Your strategy is only good for the cases where their searches are likely to return a very large fraction of the table and they don't want to see more than the first few pages.

                      Your initial requirement seems to be to work out a strategy for addressing ad hoc queries of the form (and this is the query that goes in the middle of your rownum restriction and join back, of course):
                      select
                              rowid, joindate, seq
                      from
                              tableX
                      where
                              { list of user-specified predicates}
                      ;
                      The trouble is - you want to ensure that you don't visit the table with a lot of random I/O to do this, and you probably want to restrict the execution path to small index range scans of several B-tree indexes, followed by an index hash join.

                      If every search the users supply is a collection of clauses of the form "colX = {constant}" then you could make this work by creating a load of indexes of the form: (colX, joindate, seq). But the number of indexes, the sizes of the range scans, and the cost of index maintenance could mean that it's not a cost-effective solution.
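                       To illustrate the shape of index being described (the column name colX and the index name here are placeholders, not columns from the actual table), each searchable column would need something like:

                           -- one such index per searchable column; a range scan on colX returns
                           -- rowids already ordered by (joindate, seq) for that value of colX
                           CREATE INDEX members_colx_joindate_seq_idx
                               ON members (colX, joindate, seq);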

                      Regards
                      Jonathan Lewis
                      http://jonathanlewis.wordpress.com
                      http://www.jlcomp.demon.co.uk

                       To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

                       "For every expert there is an equal and opposite expert"
                       Arthur C. Clarke
                      • 23. Re: Pagination query help needed for large table - force a different index
                        751830
                        Centinul wrote:
                        Can you use the GATHER_PLAN_STATISTICS hint that I mentioned in my posts? Would you also be able to run a SQL Trace and post the TKPROF results?

                         Also, the query plan indicates there are approximately 137K rows coming out of the TABLE ACCESS step. Does that sound like it's in the right ballpark?

                        What are the results of the following command:
                        SHOW PARAMETER OPTIMIZER
                        Thanks!
                         The query is taking quite a long time to run, but I'll post the stats as soon as I can. Never mind my previous question about autotrace vs. XPLAN; I've since read that DISPLAY_CURSOR shows the actual runtime execution path.
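                         For reference, the usual pattern (a sketch using the table and predicate from this thread) is to run the statement with the hint added and then pull the actual row counts out of the cursor cache:

                             SELECT /*+ GATHER_PLAN_STATISTICS */ t2.*
                             FROM
                             (
                                SELECT t1.*, rownum rnum
                                FROM
                                (
                                    SELECT members.*
                                     FROM members
                                     WHERE last_name = 'Smith'
                                     ORDER BY joindate
                                ) t1
                                WHERE rownum <= 100
                             ) t2
                             WHERE rnum >= 1;

                             -- actual (A-Rows) vs. estimated (E-Rows) for the last statement run in this session
                             SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));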

                        As for Show Parameter Optimizer:
                        SQL> show parameter optimizer;
                        
                        NAME                                 TYPE        VALUE
                        ------------------------------------ ----------- -----------
                        optimizer_dynamic_sampling           integer     2
                        optimizer_features_enable            string      10.2.0.1
                        optimizer_index_caching              integer     0
                        optimizer_index_cost_adj             integer     100
                        optimizer_mode                       string      ALL_ROWS
                        optimizer_secure_view_merging        boolean     TRUE
                        • 24. Re: Pagination query help needed for large table - force a different index
                          user503699
                          xaeryan wrote:
                          OK, this one here is the most efficient:
                          SELECT t2.*
                          FROM  
                          (
                          SELECT t1.*, rownum rnum 
                          FROM 
                          (
                          SELECT /*+ first_rows(100) */  members.*
                          FROM members
                          WHERE last_name =  'Smith'
                          ORDER BY createdtadstp
                          ) t1
                          WHERE rownum <= 100
                          ) t2
                          WHERE rnum >= 1
                          Here's the explain plan in a nutshell:
                          100 rows selected.
                          
                          
                          Execution Plan
                          ----------------------------------------------------------
                          Plan hash value: 1257535301
                          
                          ----------------------------------------------------------------------------------------------------
                          | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
                          ----------------------------------------------------------------------------------------------------
                          |   0 | SELECT STATEMENT               |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
                          |*  1 |  VIEW                          |                         |   100 | 38600 |  2947   (1)| 00:00:36 |
                          |*  2 |   COUNT STOPKEY                |                         |       |       |            |          |
                          |   3 |    VIEW                        |                         |   100 | 37300 |  2947   (1)| 00:00:36 |
                          |*  4 |     TABLE ACCESS BY INDEX ROWID| MEMBERS                 |   137K|    20M|  2947   (1)| 00:00:
                          |   5 |      INDEX FULL SCAN           | JOINDATE_ORDER_IDX      |  5100 |       |    19   (0)| 00:00:01 |
                          ----------------------------------------------------------------------------------------------------
                          
                          Predicate Information (identified by operation id):
                          ---------------------------------------------------
                          
                          1 - filter("RNUM">=001)
                          2 - filter(ROWNUM<=100)
                          4 - filter("LAST_NAME"='Smith')
                          
                          
                           Sorry, but that looks like made-up trace details.
                           In both queries you are using "ORDER BY createdtadstp", yet you are showing a plan that uses an index on joindate. How is that possible??
                          • 25. Re: Pagination query help needed for large table - force a different index
                            751830
                            Jonathan Lewis wrote:

                             I've run up a quick test on 10.2.0.3 of what you're trying to do here and don't run into the same problem - the plan behaves as expected and obeys the hint. One thought - what does the smaller part of the query do if you remove the hint? Maybe what you're seeing is the default plan and not a plan forced by the hint. Have you declared either of joindate or seq to be not null?

                             Apart from that, your strategy is flawed. There are two parts to it: (a) sorting a selected set of rowids and then collecting a subset of the rows - and that part of the strategy is reasonable - and (b) forcing Oracle to walk through the (joindate, seq) index - and at first sight this seems to be a very bad idea.

                            You say you have millions of rows, and that the users have many different search criteria. Think about what happens if the users supply a search that returns less than 100 rows in total. You have to walk ALL the data in the table by joindate, checking every row for the search criteria - this will be enormously expensive. Your strategy is only good for the cases where their searches are likely to return a very large fraction of the table and they don't want to see more than the first few pages.

                            Your initial requirement seems to be to work out a strategy for addressing ad hoc queries of the form (and this is the query that goes in the middle of your rownum restriction and join back, of course):
                            select
                            rowid, joindate, seq
                            from
                            tableX
                            where
                            { list of user-specified predicates}
                            ;
                            The trouble is - you want to ensure that you don't visit the table with a lot of random I/O to do this, and you probably want to restrict the execution path to small index range scans of several B-tree indexes, followed by an index hash join.

                            If every search the users supply is a collection of clauses of the form "colX = {constant}" then you could make this work by creating a load of indexes of the form: (colX, joindate, seq). But the number of indexes, the sizes of the range scans, and the cost of index maintenance could mean that it's not a cost-effective solution.
                             Thanks for your insight Jonathan (and all others as well). I understand now that if there are fewer than 100 matching records (or just a low count in general), it does a LOT of walking through my JOINDATE index before it finds 100 records (or hits the end of the index). That seems sound. However, in one of my recent posts above, I have an example of the query in my original style (pull the rowid in the inner query, join it in the outer query) where filtering down to about 50 records is still performed very quickly. The other version of the query (with the same hints) takes a significant amount of time with small result sets like in this example. It's literally still running (I started it maybe 10 minutes ago) while I try to gather stats using DBMS_XPLAN.DISPLAY_CURSOR; afterwards I plan on doing the same for the quick-responding query.

                             (I have restarted the database to make sure I'm not just seeing buffered results - the first time it takes 4-7 seconds, subsequent times are <1 sec.)

                             Also note, I have changed from using the INDEX hint to using the FIRST_ROWS hint: since the filter AND the order by change dynamically, I wouldn't be able to specify the index by name. That also means, as you mention, it's not effective to create every possible combination of index. I have basic non-unique B-tree indexes on the columns that will be filtered, and UNIQUE B-tree indexes on the columns that will be ordered (combined with the sequence to make them unique), but that's it.
                             The sequence on this table is defined as NOT NULL; the rest of the columns allow nulls.
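                             For what it's worth, the reason the NOT NULL question matters (my reading; it isn't spelled out in the thread): rows whose indexed columns are all NULL are not stored in a B-tree index, so the optimizer will only walk an index in place of sorting for the ORDER BY when it can prove every row appears in that index. A NOT NULL seq column inside a (joindate, seq) index gives that guarantee; with a single-column joindate index it would instead need something like:

                                 -- hedged sketch; only valid if no existing rows have a NULL joindate
                                 ALTER TABLE members MODIFY (joindate NOT NULL);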
                            • 26. Re: Pagination query help needed for large table - force a different index
                              751830
                               Sorry, but that looks like made-up trace details.
                               In both queries you are using "ORDER BY createdtadstp", yet you are showing a plan that uses an index on joindate. How is that possible??
                               Sorry about that; the column names in the table I'm working with aren't exactly the prettiest, so I've been renaming the columns to make them a little more reader-friendly before posting them here (and as a CYA). The plan is real; the columns JOINDATE and CREATEDTADSTP are synonymous. Sorry I missed changing that one... fixed.
                              • 27. Re: Pagination query help needed for large table - force a different index
                                751830
                                FAST query TKPROF:
                                ********************************************************************************
                                
                                SELECT rnum-1, RID, members.*
                                FROM members,
                                (
                                   SELECT RID, rownum rnum
                                   FROM
                                   (
                                       SELECT /*+ first_rows(100) */  members.rowid  as RID
                                        FROM members
                                        where last_name = 'Frederick'
                                        ORDER BY joindate
                                   )
                                   WHERE rownum <= 100
                                )
                                WHERE rnum >= 1
                                    AND RID = members.rowid
                                
                                call     count       cpu    elapsed       disk      query    current        rows
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                Parse        2      0.20       0.20          0          0          0           0
                                Execute      2      0.00       0.00          0          0          0           0
                                Fetch        8      0.01       0.05          3         98          0          88
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                total       12      0.21       0.25          3         98          0          88
                                
                                Misses in library cache during parse: 2
                                Optimizer mode: ALL_ROWS
                                Parsing user id: 5  (SYSTEM)
                                
                                Rows     Execution Plan
                                -------  ---------------------------------------------------
                                      0  SELECT STATEMENT   MODE: ALL_ROWS
                                      0   NESTED LOOPS
                                      0    VIEW
                                      0     COUNT (STOPKEY)
                                      0      VIEW
                                      0       SORT (ORDER BY STOPKEY)
                                      0        TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 
                                                   'MEMBERS' (TABLE)
                                      0         INDEX   MODE: ANALYZED (RANGE SCAN) OF 
                                                    'LAST_NAME_FILTER_IDX' (INDEX)
                                      0    TABLE ACCESS   MODE: ANALYZED (BY USER ROWID) OF 
                                               'MEMBERS' (TABLE)
                                
                                ********************************************************************************
                                SLOW query TKPROF:
                                ********************************************************************************
                                
                                SELECT rnum-1, t2.*
                                FROM
                                (
                                   SELECT t1.*, rownum rnum
                                   FROM
                                   (
                                       SELECT /*+ first_rows(100) */  members.*
                                        FROM members
                                        where last_name = 'Frederick'
                                        ORDER BY joindate
                                   ) t1
                                   WHERE rownum <= 100
                                ) t2
                                WHERE rnum >= 1
                                
                                call     count       cpu    elapsed       disk      query    current        rows
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                Parse        1      0.18       0.18          0          0          0           0
                                Execute      1      0.00       0.00          0          0          0           0
                                Fetch        4     17.46     273.73     148491    4053723          0          44
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                total        6     17.65     273.91     148491    4053723          0          44
                                
                                Misses in library cache during parse: 1
                                Optimizer mode: FIRST_ROWS
                                Parsing user id: 5  (SYSTEM)
                                
                                Rows     Execution Plan
                                -------  ---------------------------------------------------
                                      0  SELECT STATEMENT   MODE: HINT: FIRST_ROWS
                                      0   VIEW
                                      0    COUNT (STOPKEY)
                                      0     VIEW
                                      0      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 
                                                 'MEMBERS' (TABLE)
                                      0       INDEX   MODE: ANALYZED (FULL SCAN) OF 
                                                  'JOINDATE_ORDER_IDX' (INDEX (UNIQUE))
                                
                                ********************************************************************************
                                 1.) Can someone tell me why my row counts are all 0s in the TKPROF explain plans?
                                 2.) In the fast query, rows fetched is 88, even though I get back 44... ???
                                 3.) Looking at the EXPLAIN PLAN, and based on others' explanations so far, I think I can see what's going wrong - in the slow query it does a full index scan, then creates a view based off the results (the whole table), filters by last_name on the view (no index used), then does the chunking.
                                 4.) As I can see above, in the FAST query, it ignores my FIRST_ROWS hint and uses the ALL_ROWS optimizer. So I tried running the slow query again, this time with ALL_ROWS as the hint... the optimizer mode changed, but the plan itself didn't change. Here's the TKPROF:


                                ********************************************************************************
                                
                                SELECT rnum-1, t2.*
                                FROM
                                (
                                   SELECT t1.*, rownum rnum
                                   FROM
                                   (
                                       SELECT /*+ all_rows */  members.*
                                        FROM members
                                        where last_name = 'Frederick'
                                        ORDER BY joindate
                                   ) t1
                                   WHERE rownum <= 100
                                ) t2
                                WHERE rnum >= 1
                                
                                call     count       cpu    elapsed       disk      query    current        rows
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                Parse        1      0.12       0.12          0          0          0           0
                                Execute      1      0.00       0.00          0          0          0           0
                                Fetch        4     16.81    1370.82     141036    4053755          0          44
                                ------- ------  -------- ---------- ---------- ---------- ----------  ----------
                                total        6     16.93    1370.94     141036    4053755          0          44
                                
                                Misses in library cache during parse: 1
                                Optimizer mode: ALL_ROWS
                                Parsing user id: 5  (SYSTEM)
                                
                                Rows     Execution Plan
                                -------  ---------------------------------------------------
                                      0  SELECT STATEMENT   MODE: HINT: ALL_ROWS
                                      0   VIEW
                                      0    COUNT (STOPKEY)
                                      0     VIEW
                                      0      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 
                                                 'MEMBERS' (TABLE)
                                      0       INDEX   MODE: ANALYZED (FULL SCAN) OF 
                                                  'JOINDATE_ORDER_IDX' (INDEX (UNIQUE))
                                
                                ********************************************************************************
                                • 28. Re: Pagination query help needed for large table - force a different index
                                  751830
                                  Sweet!!! After gathering statistics via DBMS_STATS.GATHER_TABLE_STATS, it is now (impressively) smart enough to determine which index to use...
                                   when the WHERE clause would return a small result set (low cardinality), it will use the index on that column, then sort the results. If the WHERE clause would return a large number of records (high cardinality), it will use the ORDER BY index, which is already in order, then filter the results.
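                                   For reference, a typical DBMS_STATS.GATHER_TABLE_STATS call of the kind mentioned here (parameters assumed, not copied from the actual session) looks like:

                                       BEGIN
                                         DBMS_STATS.GATHER_TABLE_STATS(
                                           ownname => USER,
                                           tabname => 'MEMBERS',
                                           cascade => TRUE);   -- also gathers statistics on the indexes
                                       END;
                                       /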

                                  But I still get performance degradation if I jump to a page far ahead, regardless of which index is used, although it seems a bit more gradual.
                                   Based on my observations, I'm guessing this is because the index has to be walked deeper and deeper each time to get the next 100 records. Oracle's caching helps here: as long as I don't jump too far ahead, it clearly retrieves the data from the buffer cache.
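                                   One sketch of a way around that deep-paging cost (an alternative not tried in this thread; the bind variable names are made up) is to remember the last (joindate, seq) displayed and seek past it on the next request, instead of counting rows from the start of the index each time:

                                       -- "next page" relative to the last row already shown
                                       SELECT *
                                       FROM
                                       (
                                           SELECT /*+ first_rows(100) */ members.*
                                           FROM members
                                           WHERE last_name = 'Smith'
                                             AND (    joindate > :last_joindate
                                                   OR (joindate = :last_joindate AND seq > :last_seq) )
                                           ORDER BY joindate, seq
                                       )
                                       WHERE rownum <= 100;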
                                  • 29. Re: Pagination query help needed for large table - force a different index
                                    Jonathan Lewis
                                    xaeryan wrote:


                                     1.) Can someone tell me why my row counts are all 0s in the TKPROF explain plans?
                                    It looks like you are stopping the trace before closing the cursor
                                    2.) In the fast query, rows fetched is 88, even though I get back 44... ???
                                    Check the parse and execute counts - you've run the query twice
                                     3.) Looking at the EXPLAIN PLAN, and based on others' explanations so far, I think I can see what's going wrong - in the slow query it does a full index scan, then creates a view based off the results (the whole table), filters by last_name on the view (no index used), then does the chunking.
                                     Not quite. In the slow query you happen to scan all the way through the joindate_idx index and visit every row in the table because there are fewer than 100 rows matching the requirement - if there were more than 100 rows matching the last name, and if they appeared at the low end of the date range, the scan would stop early and the result would return quickly. Notice that the fast query is the one that does a sort - it's had to collect all the (fortunately small amount of) data for that name before sorting it by date.

                                    4.) As I can see above, in the FAST query, it ignores my FIRST_ROWS hint and uses the ALL_ROWS optimizer.
                                     Actually, I don't think it ignored your hint. I'm going to test the following hypothesis, but first_rows(100) tells the optimizer to find the fastest plan to get the first 100 rows - so if its cardinality estimate for the query is fewer than 100 rows, the all_rows plan is the fastest plan to get the first 100 rows.

                                    Regards
                                    Jonathan Lewis
                                    http://jonathanlewis.wordpress.com
                                    http://www.jlcomp.demon.co.uk

                                     To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

                                     "For every expert there is an equal and opposite expert"
                                     Arthur C. Clarke