
Explain plan control - hash join running long

Sturla Thor Member Posts: 14 Blue Ribbon
edited Apr 10, 2015 3:28PM in General Database Discussions

Hi all,

Wanted to see if anybody could give me pointers for a SQL statement I'm trying to "tune". This is actually code written by Oracle (standard EBS), so I don't have much control over the SQL itself.

Some history first: this SQL is part of a batch job that runs every day and inserts records into the xla_trial_balances table.

The statement usually inserts around 5,000-10,000 records per batch run, and the records are selected from 10 tables, some of them very large (I can provide stats).

The query takes about 10 hours or more to run, which seems like a lot of time for 5,000-10,000 rows.

Attached are the SQL itself (fkzwp9dwy8ng0.txt) and the SQL Monitor overview of the execution. Here is the execution plan from the latest run:

SQL>  SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('fkzwp9dwy8ng0',1));
SQL_ID  fkzwp9dwy8ng0, child number 1
-------------------------------------
    INSERT INTO xla_trial_balances (           record_type_code
 ,source_entity_id          ,event_class_code
,source_application_id          ,applied_to_entity_id
,applied_to_application_id          ,gl_date
,trx_currency_code          ,entered_rounded_dr
,entered_rounded_cr          ,entered_unrounded_dr
,entered_unrounded_cr          ,acctd_rounded_dr
,acctd_rounded_cr          ,acctd_unrounded_dr
,acctd_unrounded_cr          ,code_combination_id
,balancing_segment_value          ,natural_account_segment_value
  ,cost_center_segment_value          ,intercompany_segment_value
   ,management_segment_value          ,ledger_id
,definition_code          ,party_id          ,party_site_id
,party_type_code          ,ae_header_id          ,generated_by_code
     ,creation_date          ,created_by          ,last_update_date
     ,last_updated_by          ,last_update_login


Plan hash value: 853743902


---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                       | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                                |                           |       |       |  3050 (100)|          |       |       |
|   1 |  LOAD TABLE CONVENTIONAL                        |                           |       |       |            |          |       |       |
|   2 |   HASH GROUP BY                                 |                           |     1 |   412 |  3050   (2)| 00:00:37 |       |       |
|*  3 |    FILTER                                       |                           |       |       |            |          |       |       |
|*  4 |     HASH JOIN                                   |                           |     1 |   412 |  3049   (2)| 00:00:37 |       |       |
|   5 |      NESTED LOOPS                               |                           |       |       |            |          |       |       |
|   6 |       NESTED LOOPS                              |                           |     1 |   307 |    18   (0)| 00:00:01 |       |       |
|   7 |        NESTED LOOPS OUTER                       |                           |     1 |   271 |    17   (0)| 00:00:01 |       |       |
|   8 |         MERGE JOIN CARTESIAN                    |                           |     1 |   251 |    16   (0)| 00:00:01 |       |       |
|   9 |          NESTED LOOPS                           |                           |       |       |            |          |       |       |
|  10 |           NESTED LOOPS                          |                           |     1 |   236 |    15   (0)| 00:00:01 |       |       |
|  11 |            NESTED LOOPS OUTER                   |                           |     1 |   183 |    13   (0)| 00:00:01 |       |       |
|  12 |             NESTED LOOPS                        |                           |     1 |   126 |     8   (0)| 00:00:01 |       |       |
|  13 |              NESTED LOOPS                       |                           |     1 |    74 |     6   (0)| 00:00:01 |       |       |
|  14 |               NESTED LOOPS                      |                           |     1 |    28 |     2   (0)| 00:00:01 |       |       |
|  15 |                TABLE ACCESS BY INDEX ROWID      | GL_LEDGERS                |     1 |     7 |     1   (0)| 00:00:01 |       |       |
|* 16 |                 INDEX UNIQUE SCAN               | GL_LEDGERS_U2             |     1 |       |     0   (0)|          |       |       |
|  17 |                TABLE ACCESS BY INDEX ROWID      | FND_CURRENCIES            |   249 |  5229 |     1   (0)| 00:00:01 |       |       |
|* 18 |                 INDEX UNIQUE SCAN               | FND_CURRENCIES_U1         |     1 |       |     0   (0)|          |       |       |
|  19 |               TABLE ACCESS BY GLOBAL INDEX ROWID| XLA_AE_LINES              |     1 |    46 |     4   (0)| 00:00:01 | ROWID | ROWID |
|* 20 |                INDEX RANGE SCAN                 | XLA_AE_LINES_C2_9529961   |     1 |       |     3   (0)| 00:00:01 |       |       |
|  21 |              PARTITION LIST ITERATOR            |                           |     1 |    52 |     2   (0)| 00:00:01 |   KEY |   KEY |
|* 22 |               TABLE ACCESS BY LOCAL INDEX ROWID | XLA_AE_HEADERS            |     1 |    52 |     2   (0)| 00:00:01 |   KEY |   KEY |
|* 23 |                INDEX UNIQUE SCAN                | XLA_AE_HEADERS_U1         |     1 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
|  24 |             PARTITION LIST ITERATOR             |                           |     1 |    57 |     5   (0)| 00:00:01 |   KEY |   KEY |
|* 25 |              TABLE ACCESS BY LOCAL INDEX ROWID  | XLA_DISTRIBUTION_LINKS    |     1 |    57 |     5   (0)| 00:00:01 |   KEY |   KEY |
|* 26 |               INDEX RANGE SCAN                  | XLA_DISTRIBUTION_LINKS_N3 |    11 |       |     3   (0)| 00:00:01 |   KEY |   KEY |
|* 27 |            INDEX UNIQUE SCAN                    | GL_CODE_COMBINATIONS_U1   |     1 |       |     1   (0)| 00:00:01 |       |       |
|* 28 |           TABLE ACCESS BY INDEX ROWID           | GL_CODE_COMBINATIONS      |     1 |    53 |     2   (0)| 00:00:01 |       |       |
|  29 |          BUFFER SORT                            |                           |     1 |    15 |    14   (0)| 00:00:01 |       |       |
|* 30 |           TABLE ACCESS BY INDEX ROWID           | XLA_TB_DEFINITIONS_B      |     1 |    15 |     1   (0)| 00:00:01 |       |       |
|* 31 |            INDEX RANGE SCAN                     | XLA_TB_DEFINITIONS_B_N1   |     1 |       |     0   (0)|          |       |       |
|  32 |         TABLE ACCESS BY INDEX ROWID             | XLA_LEDGER_OPTIONS        |     2 |    40 |     1   (0)| 00:00:01 |       |       |
|* 33 |          INDEX UNIQUE SCAN                      | XLA_LEDGER_OPTIONS_U1     |     1 |       |     0   (0)|          |       |       |
|* 34 |        INDEX UNIQUE SCAN                        | XLA_EVENT_TYPES_B_U2      |     1 |       |     0   (0)|          |       |       |
|  35 |       TABLE ACCESS BY INDEX ROWID               | XLA_EVENT_TYPES_B         |     1 |    36 |     1   (0)| 00:00:01 |       |       |
|  36 |      TABLE ACCESS STORAGE FULL                  | XLA_TB_DEF_SEG_RANGES     |   754K|    75M|  3025   (2)| 00:00:37 |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------


   3 - filter(:7<=:8)
   4 - access("XTD"."DEFINITION_CODE"="XSR"."DEFINITION_CODE")
       filter((NVL("GCC"."SEGMENT1",'0')>=NVL(NVL("XSR"."SEGMENT1_FROM","GCC"."SEGMENT1"),'0') AND
              NVL("GCC"."SEGMENT1",'0')<=NVL(NVL("XSR"."SEGMENT1_TO","GCC"."SEGMENT1"),'0') AND
              NVL("GCC"."SEGMENT2",'0')>=NVL(NVL("XSR"."SEGMENT2_FROM","GCC"."SEGMENT2"),'0') AND
              NVL("GCC"."SEGMENT2",'0')<=NVL(NVL("XSR"."SEGMENT2_TO","GCC"."SEGMENT2"),'0') AND
              NVL("GCC"."SEGMENT3",'0')>=NVL(NVL("XSR"."SEGMENT3_FROM","GCC"."SEGMENT3"),'0') AND
              NVL("GCC"."SEGMENT3",'0')<=NVL(NVL("XSR"."SEGMENT3_TO","GCC"."SEGMENT3"),'0') AND
              NVL("GCC"."SEGMENT4",'0')>=NVL(NVL("XSR"."SEGMENT4_FROM","GCC"."SEGMENT4"),'0') AND
              NVL("GCC"."SEGMENT4",'0')<=NVL(NVL("XSR"."SEGMENT4_TO","GCC"."SEGMENT4"),'0') AND
              NVL("GCC"."SEGMENT5",'0')>=NVL(NVL("XSR"."SEGMENT5_FROM","GCC"."SEGMENT5"),'0') AND
              NVL("GCC"."SEGMENT5",'0')<=NVL(NVL("XSR"."SEGMENT5_TO","GCC"."SEGMENT5"),'0') AND
              NVL("GCC"."SEGMENT6",'0')>=NVL(NVL("XSR"."SEGMENT6_FROM","GCC"."SEGMENT6"),'0') AND
              NVL("GCC"."SEGMENT6",'0')<=NVL(NVL("XSR"."SEGMENT6_TO","GCC"."SEGMENT6"),'0') AND
              NVL("GCC"."SEGMENT7",'0')>=NVL(NVL("XSR"."SEGMENT7_FROM","GCC"."SEGMENT7"),'0') AND
              NVL("GCC"."SEGMENT7",'0')<=NVL(NVL("XSR"."SEGMENT7_TO","GCC"."SEGMENT7"),'0') AND
              NVL("GCC"."SEGMENT8",'0')>=NVL(NVL("XSR"."SEGMENT8_FROM","GCC"."SEGMENT8"),'0') AND
              NVL("GCC"."SEGMENT8",'0')<=NVL(NVL("XSR"."SEGMENT8_TO","GCC"."SEGMENT8"),'0') AND
              NVL("GCC"."SEGMENT9",'0')>=NVL(NVL("XSR"."SEGMENT9_FROM","GCC"."SEGMENT9"),'0') AND
              NVL("GCC"."SEGMENT9",'0')<=NVL(NVL("XSR"."SEGMENT9_TO","GCC"."SEGMENT9"),'0')))
  16 - access("GL"."LEDGER_ID"=:9)
  18 - access("FDC"."CURRENCY_CODE"="GL"."CURRENCY_CODE")
  20 - access("XAL"."AE_HEADER_ID">=:7 AND "XAL"."ACCOUNTING_CLASS_CODE"='LIABILITY' AND "XAL"."AE_HEADER_ID"<=:8)
       filter("XAL"."ACCOUNTING_CLASS_CODE"='LIABILITY')
  22 - filter(("XAH"."UPG_BATCH_ID" IS NULL AND "XAH"."GROUP_ID"=:GROUP_ID AND "XAH"."LEDGER_ID"=:9 AND
              "XAH"."EVENT_TYPE_CODE"<>'MANUAL' AND INTERNAL_FUNCTION("XAH"."GL_TRANSFER_STATUS_CODE")))
  23 - access("XAH"."AE_HEADER_ID"="XAL"."AE_HEADER_ID" AND "XAH"."APPLICATION_ID"="XAL"."APPLICATION_ID")
       filter(("XAH"."AE_HEADER_ID">=:7 AND "XAH"."AE_HEADER_ID"<=:8))
  25 - filter("XAL"."APPLICATION_ID"="XDL"."APPLICATION_ID")
  26 - access("XAL"."AE_HEADER_ID"="XDL"."AE_HEADER_ID" AND "XAL"."AE_LINE_NUM"="XDL"."AE_LINE_NUM")
       filter(("XDL"."AE_HEADER_ID">=:7 AND "XDL"."AE_HEADER_ID"<=:8))
  27 - access("XAL"."CODE_COMBINATION_ID"="GCC"."CODE_COMBINATION_ID")
  28 - filter("GCC"."CHART_OF_ACCOUNTS_ID"=:COA_ID)
  30 - filter("XTD"."ENABLED_FLAG"='Y')
  31 - access("XTD"."LEDGER_ID"=:9)
  33 - access("XLO"."APPLICATION_ID"="XAH"."APPLICATION_ID" AND "XLO"."LEDGER_ID"=:9)
  34 - access("XAH"."APPLICATION_ID"="XET"."APPLICATION_ID" AND "XAH"."EVENT_TYPE_CODE"="XET"."EVENT_TYPE_CODE")
       filter("XET"."EVENT_TYPE_CODE"<>'MANUAL')




105 rows selected.


SQL>

From the SQL Monitor HTML I can see a bad cardinality estimate, probably due to data skew: the optimizer thinks it will get 1 row from xla_ae_lines but in fact gets 7,347 rows.

99% of the execution time is spent in the hash join (operation 4), where it joins ~13k rows with the full scan of XLA_TB_DEF_SEG_RANGES (operation 36).

I have tried some hints (NO_USE_HASH, INDEX) but have been unable to get better results.

Does anyone have any pointers on how to tackle this?
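
For reference, one way to cross-check the SQL Monitor numbers per plan operation is the ALLSTATS format of DBMS_XPLAN (a minimal sketch; it assumes the cursor is still cached and that row-source statistics were collected, e.g. via statistics_level = ALL or a /*+ GATHER_PLAN_STATISTICS */ hint):

SELECT *
  FROM table(DBMS_XPLAN.DISPLAY_CURSOR('fkzwp9dwy8ng0', 1, 'ALLSTATS LAST'));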


Answers

  • User_6S0AY
    User_6S0AY Member Posts: 657 Blue Ribbon
    edited Apr 9, 2015 2:10PM

    Do you have the Tuning Pack? If you do, then run the SQL Tuning Advisor and see what recommendations you get.
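
    A minimal sketch of driving the advisor from SQL*Plus, assuming the Tuning Pack is licensed (the task name is arbitrary):

    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- create and run a tuning task against the cursor from the original post
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id    => 'fkzwp9dwy8ng0',
                  task_name => 'tune_fkzwp9dwy8ng0');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /
    -- then read the recommendations
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_fkzwp9dwy8ng0') FROM dual;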

  • Saubhik
    Saubhik Member Posts: 5,803 Gold Crown
    edited Apr 9, 2015 2:19PM

    Your cardinality estimates are very wrong. It seems that the statistics are not up to date. Try gathering statistics and run this again. Also, you have a huge wait event - do you have any clue what those wait events are?
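
    A minimal sketch of a statistics refresh on the table behind the big full scan (schema and table name are taken from the plan above; adjust to your environment):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'XLA',
        tabname    => 'XLA_TB_DEF_SEG_RANGES',
        method_opt => 'FOR ALL COLUMNS SIZE AUTO',
        cascade    => TRUE);  -- refresh the index statistics as well
    END;
    /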

  • Harmandeep Singh
    Harmandeep Singh Member Posts: 287
    edited Apr 10, 2015 3:16AM

    1. First, the plan is not as expected. It should start from XLA_AE_HEADERS (both functionally and technically), but it starts from XLA_AE_LINES due to the presence of the custom index XLA_AE_LINES_C2_9529961:

    WHERE xah.ae_header_id BETWEEN :7 AND :8
      AND xah.ledger_id    = :9
      AND xah.group_id     = :group_id

    2. Also, there seems to be a missing index on column xsr.definition_code, which looks like the only entry point. The other columns are wrapped in NVL functions, so they cannot serve as an entry into the xsr table.

    Solution

    1. Disable the custom index XLA_AE_LINES_C2_9529961.

    2. As suggested, gather the stats on all tables at a higher percentage, say 20%.

    3. Also try creating an index on definition_code if it isn't already there (if it is, ignore this); a sketch of steps 1 and 3 follows below. For this, also check Oracle Support: it seems to me there might be a bug involving table xsr, as it does not have sufficient join conditions with the other tables.
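
    A hedged sketch of steps 1 and 3 (the new index name is an assumption; INVISIBLE needs 11g or later and takes the index out of optimizer consideration without dropping it):

    -- step 1: hide the custom index from the optimizer
    ALTER INDEX XLA.XLA_AE_LINES_C2_9529961 INVISIBLE;

    -- step 3: entry-point index on the ranges table (name is a placeholder)
    CREATE INDEX XLA.XLA_TB_DEF_SEG_RANGES_N1
      ON XLA.XLA_TB_DEF_SEG_RANGES (DEFINITION_CODE);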


    Regards,

    Harman


  • 12cdb
    12cdb Member Posts: 2,971
    edited Apr 10, 2015 4:40AM

    I agree with Harmandeep on creating an index on xla_tb_def_seg_ranges(definition_code) and running the test again.

    >>36 |      TABLE ACCESS STORAGE FULL                  | XLA_TB_DEF_SEG_RANGES     |   754K|    75M|  3025   (2)| 00:00:37 |       |       |


    >> xtd.definition_code = xsr.definition_code


    Pradeep

  • Lothar Flatz
    Lothar Flatz Member Posts: 681 Silver Badge
    edited Apr 10, 2015 5:02AM

    There are two major errors in cardinality.

    The first is in operation 20. The condition is a bit complex and might be difficult for the optimizer. Consider extended stats; I am not sure if they help.

    The second is in operation 23. This time I do recommend column group stats for sure (see the sketch below).

    If there is an index on DEFINITION_CODE of table XLA_TB_DEF_SEG_RANGES, it should definitely be used, given the estimate.
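
    A sketch of the column-group suggestion for the operation-23 access columns (the column pair comes from the predicate section above; the schema name is an assumption):

    DECLARE
      l_ext VARCHAR2(128);
    BEGIN
      l_ext := DBMS_STATS.CREATE_EXTENDED_STATS(
                 ownname   => 'XLA',
                 tabname   => 'XLA_AE_HEADERS',
                 extension => '(AE_HEADER_ID, APPLICATION_ID)');
      -- the column group is only populated by the next statistics gather
      DBMS_STATS.GATHER_TABLE_STATS('XLA', 'XLA_AE_HEADERS');
    END;
    /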

  • Dom Brooks
    Dom Brooks Member Posts: 5,552 Silver Crown
    edited Apr 10, 2015 5:53AM

    Or dynamic sampling.
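
    For example (the sampling level is an assumption; pick one appropriate for testing):

    ALTER SESSION SET optimizer_dynamic_sampling = 4;
    -- or per statement: SELECT /*+ DYNAMIC_SAMPLING(xsr 4) */ ...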

  • Martin Preiss
    Martin Preiss Member Posts: 2,381 Gold Trophy
    edited Apr 10, 2015 6:10AM

    this looks strange indeed. There are some wrong cardinalities in the plan - but the extreme duration comes from the HASH JOIN with its complex condition:

       4 - access("XTD"."DEFINITION_CODE"="XSR"."DEFINITION_CODE")  
           filter((NVL("GCC"."SEGMENT1",'0')>=NVL(NVL("XSR"."SEGMENT1_FROM","GCC"."SEGMENT1"),'0') AND  
                  NVL("GCC"."SEGMENT1",'0')<=NVL(NVL("XSR"."SEGMENT1_TO","GCC"."SEGMENT1"),'0') AND  
                  NVL("GCC"."SEGMENT2",'0')>=NVL(NVL("XSR"."SEGMENT2_FROM","GCC"."SEGMENT2"),'0') AND  
                  NVL("GCC"."SEGMENT2",'0')<=NVL(NVL("XSR"."SEGMENT2_TO","GCC"."SEGMENT2"),'0') AND  
                  NVL("GCC"."SEGMENT3",'0')>=NVL(NVL("XSR"."SEGMENT3_FROM","GCC"."SEGMENT3"),'0') AND  
                  NVL("GCC"."SEGMENT3",'0')<=NVL(NVL("XSR"."SEGMENT3_TO","GCC"."SEGMENT3"),'0') AND  
                  NVL("GCC"."SEGMENT4",'0')>=NVL(NVL("XSR"."SEGMENT4_FROM","GCC"."SEGMENT4"),'0') AND  
                  NVL("GCC"."SEGMENT4",'0')<=NVL(NVL("XSR"."SEGMENT4_TO","GCC"."SEGMENT4"),'0') AND  
                  NVL("GCC"."SEGMENT5",'0')>=NVL(NVL("XSR"."SEGMENT5_FROM","GCC"."SEGMENT5"),'0') AND  
                  NVL("GCC"."SEGMENT5",'0')<=NVL(NVL("XSR"."SEGMENT5_TO","GCC"."SEGMENT5"),'0') AND  
                  NVL("GCC"."SEGMENT6",'0')>=NVL(NVL("XSR"."SEGMENT6_FROM","GCC"."SEGMENT6"),'0') AND  
                  NVL("GCC"."SEGMENT6",'0')<=NVL(NVL("XSR"."SEGMENT6_TO","GCC"."SEGMENT6"),'0') AND  
                  NVL("GCC"."SEGMENT7",'0')>=NVL(NVL("XSR"."SEGMENT7_FROM","GCC"."SEGMENT7"),'0') AND  
                  NVL("GCC"."SEGMENT7",'0')<=NVL(NVL("XSR"."SEGMENT7_TO","GCC"."SEGMENT7"),'0') AND  
                  NVL("GCC"."SEGMENT8",'0')>=NVL(NVL("XSR"."SEGMENT8_FROM","GCC"."SEGMENT8"),'0') AND  
                  NVL("GCC"."SEGMENT8",'0')<=NVL(NVL("XSR"."SEGMENT8_TO","GCC"."SEGMENT8"),'0') AND  
                  NVL("GCC"."SEGMENT9",'0')>=NVL(NVL("XSR"."SEGMENT9_FROM","GCC"."SEGMENT9"),'0') AND  
                  NVL("GCC"."SEGMENT9",'0')<=NVL(NVL("XSR"."SEGMENT9_TO","GCC"."SEGMENT9"),'0'))) 
    

    Since the CBO expects the input from step 5 to be very small (0 or 1 row) and still does not choose a NESTED LOOPS join, I guess there is some reason for this strategy. And I guess that the huge CPU usage comes from the evaluation of the step-4 filter: the hash join can use only the DEFINITION_CODE equality as its access condition, so the long list of NVL range checks has to be evaluated for every row pair the probe produces. I did not try to understand what this condition actually means - but I think it's the reason for the performance problem.

  • Lothar Flatz
    Lothar Flatz Member Posts: 681 Silver Badge
    edited Apr 10, 2015 7:12AM
    Martin Preiss wrote:
    
    this looks strange indeed. There are some wrong cardinalities in the plan - but the extreme duration comes from the HASH JOIN with its complex condition:
    
       4 - access("XTD"."DEFINITION_CODE"="XSR"."DEFINITION_CODE")  
           filter((NVL("GCC"."SEGMENT1",'0')>=NVL(NVL("XSR"."SEGMENT1_FROM","GCC"."SEGMENT1"),'0') AND  
                  NVL("GCC"."SEGMENT1",'0')<=NVL(NVL("XSR"."SEGMENT1_TO","GCC"."SEGMENT1"),'0') AND  
                  NVL("GCC"."SEGMENT2",'0')>=NVL(NVL("XSR"."SEGMENT2_FROM","GCC"."SEGMENT2"),'0') AND  
                  NVL("GCC"."SEGMENT2",'0')<=NVL(NVL("XSR"."SEGMENT2_TO","GCC"."SEGMENT2"),'0') AND  
                  NVL("GCC"."SEGMENT3",'0')>=NVL(NVL("XSR"."SEGMENT3_FROM","GCC"."SEGMENT3"),'0') AND  
                  NVL("GCC"."SEGMENT3",'0')<=NVL(NVL("XSR"."SEGMENT3_TO","GCC"."SEGMENT3"),'0') AND  
                  NVL("GCC"."SEGMENT4",'0')>=NVL(NVL("XSR"."SEGMENT4_FROM","GCC"."SEGMENT4"),'0') AND  
                  NVL("GCC"."SEGMENT4",'0')<=NVL(NVL("XSR"."SEGMENT4_TO","GCC"."SEGMENT4"),'0') AND  
                  NVL("GCC"."SEGMENT5",'0')>=NVL(NVL("XSR"."SEGMENT5_FROM","GCC"."SEGMENT5"),'0') AND  
                  NVL("GCC"."SEGMENT5",'0')<=NVL(NVL("XSR"."SEGMENT5_TO","GCC"."SEGMENT5"),'0') AND  
                  NVL("GCC"."SEGMENT6",'0')>=NVL(NVL("XSR"."SEGMENT6_FROM","GCC"."SEGMENT6"),'0') AND  
                  NVL("GCC"."SEGMENT6",'0')<=NVL(NVL("XSR"."SEGMENT6_TO","GCC"."SEGMENT6"),'0') AND  
                  NVL("GCC"."SEGMENT7",'0')>=NVL(NVL("XSR"."SEGMENT7_FROM","GCC"."SEGMENT7"),'0') AND  
                  NVL("GCC"."SEGMENT7",'0')<=NVL(NVL("XSR"."SEGMENT7_TO","GCC"."SEGMENT7"),'0') AND  
                  NVL("GCC"."SEGMENT8",'0')>=NVL(NVL("XSR"."SEGMENT8_FROM","GCC"."SEGMENT8"),'0') AND  
                  NVL("GCC"."SEGMENT8",'0')<=NVL(NVL("XSR"."SEGMENT8_TO","GCC"."SEGMENT8"),'0') AND  
                  NVL("GCC"."SEGMENT9",'0')>=NVL(NVL("XSR"."SEGMENT9_FROM","GCC"."SEGMENT9"),'0') AND  
                  NVL("GCC"."SEGMENT9",'0')<=NVL(NVL("XSR"."SEGMENT9_TO","GCC"."SEGMENT9"),'0'))) 
    
    Since the CBO expects the input from step 5 to be very small (0 or 1 row) and still does not choose a NESTED LOOPS join I guess there is some reason for this strategy. And I guess that the huge CPU usage comes from the evaluation of the step 4 filter. I did not try to understand what this condition actually means - but I think it's the reason for the performance problem.
    

    Good point; indeed the CPU spent on the hash join is unexpected, IMHO. I would also expect nested loops here.

  • Sturla Thor
    Sturla Thor Member Posts: 14 Blue Ribbon
    edited Apr 10, 2015 7:23AM

    Hi,

    I had tried to put an index on DEFINITION_CODE of table XLA_TB_DEF_SEG_RANGES, and the optimizer still chooses a full table scan.

    I'm gathering stats on the tables with a 20% sample.

    I've thought of dynamic sampling; it might be a way to let the optimizer know the right cardinality. I will try it to see whether the optimizer chooses a different plan when it gets the right cardinality.

    However, most of the wait time is CPU activity in the hash join. As Martin points out, it might be because of the complexity of the filter in step 4.

    I tried the hint /*+ NO_USE_HASH(xsr) */ and it used a nested loops join instead, but still with long CPU time on the join. I didn't wait for it to finish, however; I just saw that the progress was approximately the same as with the hash join.

    The SQL Tuning Advisor came up with a new execution plan as one suggestion, and new indexes as another:

    XLA.XLA_AE_LINES("ACCOUNTING_CLASS_CODE", "AE_HEADER_ID")
    XLA.XLA_AE_HEADERS("GROUP_ID", "AE_HEADER_ID")
    XLA.XLA_DISTRIBUTION_LINKS("AE_HEADER_ID", "APPLICATION_ID", "AE_LINE_NUM")
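
    Written out as DDL, those suggestions would look roughly like this (the index names are placeholders, the advisor generates its own; on these partitioned tables LOCAL variants may be preferable where the keys allow):

    CREATE INDEX XLA.XLA_AE_LINES_IX1
      ON XLA.XLA_AE_LINES (ACCOUNTING_CLASS_CODE, AE_HEADER_ID);

    CREATE INDEX XLA.XLA_AE_HEADERS_IX1
      ON XLA.XLA_AE_HEADERS (GROUP_ID, AE_HEADER_ID);

    CREATE INDEX XLA.XLA_DISTRIBUTION_LINKS_IX1
      ON XLA.XLA_DISTRIBUTION_LINKS (AE_HEADER_ID, APPLICATION_ID, AE_LINE_NUM);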

    I'm trying out the new execution plan now.

    Will post it shortly.

    Thanks all for your great comments!

  • Martin Preiss
    Martin Preiss Member Posts: 2,381 Gold Trophy
    edited Apr 10, 2015 7:35AM

    Just one additional detail: I would not recommend gathering statistics with a 20% sample. I don't think it hurts, but AUTO_SAMPLE_SIZE should get you a result of the same quality at a lower resource cost: http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/. If the quality of the statistics is not sufficient, I would use a 100% sample (if that is still called a sample). And if those statistics don't help the optimizer to create a decent plan, I would use dynamic sampling (or maybe extended stats, histograms, etc. in other cases). But I still don't think this is a statistics issue (though that is the correct answer to most SQL performance questions).
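
    If you do switch, a sketch of pinning that preference for future gathers (schema and table names are assumptions, taken from the plan):

    BEGIN
      DBMS_STATS.SET_TABLE_PREFS(
        ownname => 'XLA',
        tabname => 'XLA_TB_DEF_SEG_RANGES',
        pname   => 'ESTIMATE_PERCENT',
        pvalue  => 'DBMS_STATS.AUTO_SAMPLE_SIZE');
    END;
    /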

This discussion has been closed.