
SQL & PL/SQL


11.2.0.4 DB rewrites query rendering remote index useless in distributed SQL

cj.travis, Mar 19 2014 (edited Mar 20 2014)

Background:

Since upgrading from 10.2.0.4 (SPARC, Solaris 10) to 11.2.0.4 (SPARC, Solaris 11) in November 2013, we've noticed degradation in many of our nightly ETLs that go across database links. We have decommissioned the 10g database but are leaving it running until the hardware reaches its end of life.

Just recently I decided to trace the sessions, run the ETL in both the 10g and 11g environments, and review the TKPROF output. On the old system, the ETL ran in about 10 seconds. On the new system, it regularly runs for 10-12 minutes.

Here is the now "problematic" SQL taken straight from our ETL procedure:

INSERT /*+ APPEND NOLOGGING PARALLEL(T,4) */
INTO   STG.STG_PS_FG_SO_LINE_AT T
SELECT AUDIT_OPRID, AUDIT_STAMP, AUDIT_ACTN, BUSINESS_UNIT, SO_ID, SO_LINE,
       ACTIVITY_CODE, SOACTIVITY_STATUS, SOACT_PRIORITY, PRICE,
       FGAS_SO_ORIG_PRICE, FGAS_PRC_REASON_CD, FGAS_QTY_ORDERED,
       FGAS_TTL_TAX_RATE, FGAS_TAX_PER_GAL
FROM   SYSADM.PS_FG_SO_LINE_AT@REMOTE_CRM S
WHERE  1=1
  AND  S.AUDIT_ACTN = 'A'
  AND  S.AUDIT_STAMP > TRUNC(SYSDATE) - 2;

As you can see, this is an incremental load where we truncate and reload two days' worth of data. The index on the remote database is on AUDIT_STAMP and AUDIT_ACTN.
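For context, the remote index presumably looks something like this (a sketch pieced together from the description above; the actual DDL isn't shown in this post):

CREATE INDEX SYSADM.PSAFG_SO_LINE_AT
    ON SYSADM.PS_FG_SO_LINE_AT (AUDIT_STAMP, AUDIT_ACTN);

With AUDIT_STAMP as the leading column, a usable range predicate on it permits an INDEX RANGE SCAN; if that predicate stops being usable for index access, the optimizer can only skip-scan its way to AUDIT_ACTN, which is what the plans below show.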

Here is the explain plan from the 10g (good performance) traced session/TKPROF:

Plan hash value: 3186031151

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE     |                  |   538K|    40M|  3190   (1)| 00:00:39 |        |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_FG_SO_LINE_AT |   538K|    40M|  3190   (1)| 00:00:39 |  CRPRD |
|*  2 |   INDEX RANGE SCAN          | PSAFG_SO_LINE_AT | 96987 |       |   219   (0)| 00:00:03 |  CRPRD |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("A1"."AUDIT_STAMP">TRUNC(TO_NUMBER(:1))-2 AND "A1"."AUDIT_ACTN"='A' AND
              "A1"."AUDIT_STAMP" IS NOT NULL)
       filter("A1"."AUDIT_ACTN"='A')

And here is the explain plan on 11g (bad performance), taken from TKPROF:

Plan hash value: 2117389654

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE     |                  |   538K|    40M| 27297   (1)| 00:05:28 |        |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_FG_SO_LINE_AT |   538K|    40M| 27297   (1)| 00:05:28 |  CRPRD |
|*  2 |   INDEX SKIP SCAN           | PSAFG_SO_LINE_AT | 96987 |       | 24326   (1)| 00:04:52 |  CRPRD |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("A1"."AUDIT_ACTN"='A')
       filter("A1"."AUDIT_ACTN"='A' AND SYS_EXTRACT_UTC(INTERNAL_FUNCTION("A1"."AUDIT_STAMP"))>
              SYS_EXTRACT_UTC(CAST(TRUNC(SYSDATE@!)-2 AS TIMESTAMP WITH TIME ZONE)))

What stands out is that Oracle rewrote the query as shown below (extracted from TKPROF); the 10g optimizer allows an INDEX RANGE SCAN, while on 11g it becomes an INDEX SKIP SCAN:

SELECT /*+ OPAQUE_TRANSFORM */
       "AUDIT_OPRID", "AUDIT_STAMP", "AUDIT_ACTN", "BUSINESS_UNIT", "SO_ID",
       "SO_LINE", "ACTIVITY_CODE", "SOACTIVITY_STATUS", "SOACT_PRIORITY",
       "PRICE", "FGAS_SO_ORIG_PRICE", "FGAS_PRC_REASON_CD", "FGAS_QTY_ORDERED",
       "FGAS_TTL_TAX_RATE", "FGAS_TAX_PER_GAL"
FROM   "SYSADM"."PS_FG_SO_LINE_AT"@REMOTE_CRM "S"
WHERE  "AUDIT_ACTN" = 'A'
  AND  "AUDIT_STAMP" > CAST (TRUNC (SYSDATE) - 2 AS TIMESTAMP WITH TIME ZONE);

Because the database cast TRUNC(SYSDATE) to a TIMESTAMP WITH TIME ZONE, the remote index is no longer being used for a range scan. I've never seen this happen before. Is there a way to hint or instruct the optimizer not to do this?
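One workaround I'm considering (an untested sketch on my part, and it assumes the remote AUDIT_STAMP column is a plain TIMESTAMP rather than a DATE) is to make the conversion explicit on our side, so nothing gets promoted to TIMESTAMP WITH TIME ZONE:

INSERT /*+ APPEND NOLOGGING PARALLEL(T,4) */
INTO   STG.STG_PS_FG_SO_LINE_AT T
SELECT AUDIT_OPRID, AUDIT_STAMP, AUDIT_ACTN, BUSINESS_UNIT, SO_ID, SO_LINE,
       ACTIVITY_CODE, SOACTIVITY_STATUS, SOACT_PRIORITY, PRICE,
       FGAS_SO_ORIG_PRICE, FGAS_PRC_REASON_CD, FGAS_QTY_ORDERED,
       FGAS_TTL_TAX_RATE, FGAS_TAX_PER_GAL
FROM   SYSADM.PS_FG_SO_LINE_AT@REMOTE_CRM S
WHERE  S.AUDIT_ACTN = 'A'
  -- explicit CAST to plain TIMESTAMP (no time zone); assumes the remote
  -- AUDIT_STAMP column really is a TIMESTAMP
  AND  S.AUDIT_STAMP > CAST(TRUNC(SYSDATE) - 2 AS TIMESTAMP);

Another option might be to compute TRUNC(SYSDATE) - 2 into a PL/SQL variable of the matching datatype and bind that, instead of shipping the expression across the link.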

Thanks in advance for the help.

Comments

AndrewSayer

What do you mean it looks okay?

Explain plan doesn't tell you the real plan that will be used; you should always query the execution plan directly from the cursor cache (use dbms_xplan.display_cursor).

You are doing several full table scans and sorting the results. How long did you think it would take? How long does it take? What is the session actually doing: is it waiting on locks, or is it doing work? Check EVENT in v$session.
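For example (a minimal sketch; plug in the SID of the session running the statement and the values reported from v$sql):

-- what is the session doing/waiting on right now?
select sid, status, event, wait_class
from   v$session
where  sid = {your session's SID};

-- the real plan, from the cursor cache
select * from table(dbms_xplan.display_cursor('{sql_id}', {child_number}));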

John Thorton

user13328581 wrote:

see result below.

Plan hash value: 2863189736

------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT          |                    |     1 |   231 |   160   (6)| 00:00:02 |
|   1 |  UPDATE                   | GS_TABLE           |       |       |            |          |
|*  2 |   HASH JOIN SEMI          |                    |     1 |   231 |   130   (0)| 00:00:02 |
|*  3 |    TABLE ACCESS FULL      | GS_TABLE           |     5 |   895 |   123   (0)| 00:00:02 |
|*  4 |    TABLE ACCESS FULL      | UPDATEDPROGRAMCODE |   850 | 44200 |     7   (0)| 00:00:01 |
|*  5 |   VIEW                    |                    |    11 |  2024 |     8  (13)| 00:00:01 |
|*  6 |    WINDOW SORT PUSHED RANK|                    |    11 |   440 |     8  (13)| 00:00:01 |
|*  7 |     FILTER                |                    |       |       |            |          |
|*  8 |      TABLE ACCESS FULL    | UPDATEDPROGRAMCODE |    11 |   440 |     7   (0)| 00:00:01 |
|   9 |   VIEW                    |                    |   850 |  1138K|     9  (23)| 00:00:01 |
|  10 |    SORT ORDER BY          |                    |   850 |   685K|     9  (23)| 00:00:01 |
|* 11 |     VIEW                  |                    |   850 |   685K|     8  (13)| 00:00:01 |
|  12 |      WINDOW SORT          |                    |   850 | 47600 |     8  (13)| 00:00:01 |
|* 13 |       FILTER              |                    |       |       |            |          |
|* 14 |        TABLE ACCESS FULL  | UPDATEDPROGRAMCODE |   850 | 47600 |     7   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

According to the posted plan, the query should take about 2 seconds to complete and access fewer than 1,000 rows in total.

How do these values compare with reality?

Please quantify "a bit of time".

Jonathan Lewis

It looks like your update is using an existence subquery to identify rows, then updating two columns with individual subqueries that need to do a tablescan to find a few hundred rows, and do some sorting, for each row updated.

You don't show the predicate section of the plan, so we have no idea which predicates may be making the optimizer come up with very bad estimates, so:

  The plan thinks that there will be only one row to update. Check with a select based on the existence subquery: is this close?

  How much work does Oracle do for a typical example value in each of the subqueries? Oracle's estimate is in the order of a couple of seconds per row updated (so if you're updating a couple of thousand rows you could be waiting for 20 minutes), if Oracle is close to correct in its subquery prediction.

Regards

Jonathan Lewis

user13328581

In reality, it is taking more than 1 hour...

Jonathan: I could try converting it to a select with existence... Also, how do I show the predicate section of the plan in SQL Developer?

Pavan Kumar

Hi Jonathan,

------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT          |                    |     1 |   231 |   160   (6)| 00:00:02 |
|   1 |  UPDATE                   | GS_TABLE           |       |       |            |          |
|*  2 |   HASH JOIN SEMI          |                    |     1 |   231 |   130   (0)| 00:00:02 |
|*  3 |    TABLE ACCESS FULL      | GS_TABLE           |     5 |   895 |   123   (0)| 00:00:02 |
|*  4 |    TABLE ACCESS FULL      | UPDATEDPROGRAMCODE |   850 | 44200 |     7   (0)| 00:00:01 |

Hopefully the update of GS_TABLE is joined with UPDATEDPROGRAMCODE (850 rows), trying to fetch the top rank with a range/window, and performing the hash semi-join (probably a correlated query to get a lookup or summed value) with the lines below (self-referential on the same table):

|*  5 |   VIEW                    |                    |    11 |  2024 |     8  (13)| 00:00:01 |
|*  6 |    WINDOW SORT PUSHED RANK|                    |    11 |   440 |     8  (13)| 00:00:01 |
|*  7 |     FILTER                |                    |       |       |            |          |
|*  8 |      TABLE ACCESS FULL    | UPDATEDPROGRAMCODE |    11 |   440 |     7   (0)| 00:00:01 |

and finally making sure of the hash join semi (a NOT EXISTS with some sort of business exclusion condition):

|   9 |   VIEW                    |                    |   850 |  1138K|     9  (23)| 00:00:01 |
|  10 |    SORT ORDER BY          |                    |   850 |   685K|     9  (23)| 00:00:01 |
|* 11 |     VIEW                  |                    |   850 |   685K|     8  (13)| 00:00:01 |
|  12 |      WINDOW SORT          |                    |   850 | 47600 |     8  (13)| 00:00:01 |
|* 13 |       FILTER              |                    |       |       |            |          |
|* 14 |        TABLE ACCESS FULL  | UPDATEDPROGRAMCODE |   850 | 47600 |     7   (0)| 00:00:01 |

How about rewriting as:

1. First eliminate the rows which aren't needed (if we have an index on the joining condition, only an index lookup will happen), so we have a subset of the required data.

2. Update using that result set, which should be small enough to fit into the PGA, then get the summary of the group of required values.

Please correct me

- Pavan Kumar N

AndrewSayer

user13328581 wrote:

In reality, it is taking more than 1 hour...

Jonathan: I could try converting it to a select with existence... Also, how do I show the predicate section of the plan in SQL Developer?

Use SQL*Plus to display the execution plan in a postable format:

Do:

explain plan for <update statement>;

select * from table(dbms_xplan.display);

Post the update statement if you want help rewriting it. It can almost definitely be rewritten so that UPDATEDPROGRAMCODE and GS_TABLE are scanned only once, with the result used to update your table. You'll probably find it helpful to write a query that generates the new versions of the rows and use a MERGE statement.
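As a very rough illustration only (the real statement hasn't been posted, so every table, column, and join name below is invented):

merge into gs_table g
using (
        -- compute both new values in a single pass of updatedprogramcode
        select  programcode,                                         -- hypothetical join key
                max(value1) keep (dense_rank last order by upd_seq) as new_col1,
                max(value2) keep (dense_rank last order by upd_seq) as new_col2
        from    updatedprogramcode
        group by programcode
      ) u
on (g.programcode = u.programcode)
when matched then update set
        g.column1 = u.new_col1,
        g.column2 = u.new_col2;

The point is that UPDATEDPROGRAMCODE is scanned once in the USING clause rather than once (or more) per updated row.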

Jonathan Lewis

Pavan Kumar N,

------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT          |                    |     1 |   231 |   160   (6)| 00:00:02 |
|   1 |  UPDATE                   | GS_TABLE           |       |       |            |          |
|*  2 |   HASH JOIN SEMI          |                    |     1 |   231 |   130   (0)| 00:00:02 |
|*  3 |    TABLE ACCESS FULL      | GS_TABLE           |     5 |   895 |   123   (0)| 00:00:02 |
|*  4 |    TABLE ACCESS FULL      | UPDATEDPROGRAMCODE |   850 | 44200 |     7   (0)| 00:00:01 |
|*  5 |   VIEW                    |                    |    11 |  2024 |     8  (13)| 00:00:01 |
|*  6 |    WINDOW SORT PUSHED RANK|                    |    11 |   440 |     8  (13)| 00:00:01 |
|*  7 |     FILTER                |                    |       |       |            |          |
|*  8 |      TABLE ACCESS FULL    | UPDATEDPROGRAMCODE |    11 |   440 |     7   (0)| 00:00:01 |
|   9 |   VIEW                    |                    |   850 |  1138K|     9  (23)| 00:00:01 |
|  10 |    SORT ORDER BY          |                    |   850 |   685K|     9  (23)| 00:00:01 |
|* 11 |     VIEW                  |                    |   850 |   685K|     8  (13)| 00:00:01 |
|  12 |      WINDOW SORT          |                    |   850 | 47600 |     8  (13)| 00:00:01 |
|* 13 |       FILTER              |                    |       |       |            |          |
|* 14 |        TABLE ACCESS FULL  | UPDATEDPROGRAMCODE |   850 | 47600 |     7   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


When formatted in fixed font the plan LOOKS LIKE the plan for a query of the form:

update gs_table
set
        column1 = (correlated subquery against updatedprogramcode including analytic functions),
        column2 = (correlated subquery against updatedprogramcode including analytic functions)
where
        {some predicates}
and     exists (select correlated from updatedprogramcode)

However, the OP's reply to my posting suggests that my guess was wrong, since (s)he has said they "could try converting it with a select with existence" and hasn't answered any of the questions I asked about where the work happens and how many rows are involved. Since I've had no response to my requests for information, I'm not going to speculate on where the time goes or how the performance could be improved.

Regards

Jonathan Lewis

P.S.  I suspect the plan also suggests that the version is 12c, but I haven't thought too carefully about that.

Jonathan Lewis

How to get the plan with predicates probably depends on the version of SQL Developer - it's not a tool I use regularly. Whatever else you do, the two key points that benefit the forum (and you) are to use dbms_xplan and to get text output.

For basic usage you should be able to execute two queries from the SQL screen:

explain plan for
{your SQL statement}

select * from table(dbms_xplan.display);

If the last couple of lines of the output are a note about "using old version of PLAN_TABLE", you will need to "drop table PLAN_TABLE;" (but make sure you get permission from the owner of the database or schema before you do that).

As an alternative you could run the query, then do:

select sql_id, child_number, sql_text from V$sql where sql_text like '%{some identifiable bit of your SQL}%';

When you can identify the row in v$sql that is your query you can do:

select * from table(dbms_xplan.display_cursor('{the sql_id reported}', {the child_number reported}));

If you want to see where the work goes and how much data is involved you can use the SQL Developer autotrace feature, but at present this will only give you a graphical display which doesn't reproduce well in the forum. However, the STARTS and A-Rows columns tell you about the workload line by line, and will allow you to answer my question about how many rows are identified as in need of update.
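If you'd rather have those rowsource statistics as text, one standard dbms_xplan recipe (generic, not specific to this statement) is to run the query once with rowsource statistics enabled and then pull the actuals:

select /*+ gather_plan_statistics */ {your SQL statement};

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

The Starts and A-Rows columns in that output are the line-by-line workload figures mentioned above.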

Regards

Jonathan Lewis

kulikouski

Hi,

You could try rewriting the UPDATE as a MERGE; I believe it will be faster.

KR

Jonathan Lewis

It might be faster, but you don't know whether the time is spent on identifying the data to be updated or on performing the subqueries that calculate the new values. And since it appears that there are two (or more) columns being updated by two significantly different subqueries that involve analytic functions, it might not be possible to create an efficient USING subquery for a merge. So it might be better to wait and see what the statement looks like before guessing how best to change it.

Regards

Jonathan Lewis

Chris Hunt

Can you move your monitor closer to the window? I'm having difficulty seeing your update statement.

user13328581

Hi, the version is 12c. After further digging, I realized that the reason the update statement is slow is the number of triggers created on the table being updated.
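For anyone checking the same thing, a query along these lines lists the triggers on a table (assuming the table being updated is GS_TABLE):

select trigger_name, trigger_type, triggering_event, status
from   user_triggers
where  table_name = 'GS_TABLE';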
