
Informatica Repository and Target database

832697 — Feb 7 2011, edited Feb 7 2011
I have a scenario in which the repository database is DB1 and the target database is DB2. How is that possible? I mean, isn't the target database stored in the repository? Is it still all right?
This post has been answered by Navin Kumar Bolla on Feb 7 2011

Comments

unknown-951199

>Have you any idea why Oracle decided to ignore Dynamic Sampling?  Is this zero filtered sample card a clue?


Contrary to your assertions, I contend that something is different.

Dom Brooks

When I've seen the "Not using dynamic sampling" note, there is sometimes an additional explanation as to why.

Could it be that in the level 4 block sample there were no rows which matched the single table predicates, and therefore the optimizer rejected the sample?

Update 1:

Just found an example in Jonathan's chapter 11 of Expert Oracle Practices which supports this as a possibility.

Update 2:

And I just found an old blog post of mine with an observation of a similar problem, and a similar excerpt from the 10053 trace:

https://orastory.wordpress.com/2012/06/19/flip-flop-hooray-ho/

Stefan Koehler

Hi Mohamed,

maybe a SQL Dynamic Sampling Services trace can provide more information about the root cause. Just set the following event and re-run the CBO trace:

SQL> alter session set events 'trace[RDBMS.SQL_DS] disk=high';

You may also have noticed the documentation inconsistency about dynamic sampling level 4: the sample block count is 31, not 64 - so Dom's suggestion may point in the right direction.

Regards

Stefan

Dom Brooks

Nice info on the DS trace - I didn't know that.

Mohamed Houri

Hi Stefan

Thanks for the suggestion.

I will update this thread when I am done with the DS trace.

Best regards

Mohamed Houri

Randolf Geist

Mohamed Houri wrote:

Dears,

I have two identical databases PRODUCTION and COPY.

I have a query that is running dramatically slowly in PRODUCTION because of a wrong NESTED LOOP choice: an over-large outer row source drives a huge number of executions of the inner operation.

This wrong NESTED LOOP choice is due to a wrong estimation done by the CBO.

However, in the COPY database the same query gets an execution plan using a HASH JOIN operation, which allows the query to complete in an acceptable response time.

The database release is: TNS for Linux: Version 11.2.0.3.0 – Production

The statistics are exactly the same for the concerned tables and columns. Exactly the same.

The difference comes from the estimation done when accessing the outer table. In the COPY database the estimation is approximately right, while in the PRODUCTION database the estimation equals the always-suspicious cardinality of 1.

After several checks I decided to get both 10053 trace files, and here is what I have isolated in those trace files:

 

In PRODUCTION, meanwhile, Dynamic Sampling failed without showing the reason for the failure. Consequently I ended up with that cardinality estimate of 1, which produced the NESTED LOOP plan.

Have you any idea why Oracle decided to ignore Dynamic Sampling?  Is this zero filtered sample card a clue?


Best regards

Mohamed Houri

Hi Mohamed,

a couple of remarks:

1. If possible - i.e. if you can reproduce at will - I would suggest enabling SQL extended trace in addition to the 10053 trace (or the trace suggested by Stefan) to see the exact details of the "failed" Dynamic Sampling (DS) query. The failure could mean that the recursive DS query ran into an ORA- error, which you should be able to tell from the SQL trace - I'm not sure it shows up in the other traces.

2. There is a sanity check built into DS - if the number of blocks sampled is rather small and the "filtered sample card." is 0, the code can reject the DS results and ignore them. Your "COPY" trace excerpt shows "1" as result, whereas "PROD" shows 0 - so this might be an edge case where depending on the actual rows sampled the rejection happens or not. In such a case a higher sample level that uses more blocks might be required to get stable DS results.

3. Does "COPY and PRODUCTION are identical" mean these are true binary copies or was this copy made via Export / Import or similar that could change the *physical ordering of the rows*. This could also explain why you see different sample results.

4. You say "consequently I ended up by having 1 Cardinality estimation", but a "failed" DS query should in fact mean that the DS results get ignored and the corresponding single table cardinality estimates are simply based on stored statistics instead of Dynamic Sampling. So is it correct that if you disable Dynamic Sampling you get that bad "0.72" rows estimate based on stats? If not, *this is clearly a bug*. If yes, have you checked other possible workarounds, like Extended Statistics, to get better estimates based on stored statistics?

5. There are various bugs related to Dynamic Sampling, one that might apply in your situation is "Bug 17760686  Bad Cardinality estimation with dynamic sampling"

Randolf

Jonathan Lewis

Randolf Geist wrote:


2. There is a sanity check built into DS - if the number of blocks sampled is rather small and the "filtered sample card." is 0, the code can reject the DS results and ignore them. Your "COPY" trace excerpt shows "1" as result, whereas "PROD" shows 0 - so this might be an edge case where depending on the actual rows sampled the rejection happens or not. In such a case a higher sample level that uses more blocks might be required to get stable DS results.

3. Does "COPY and PRODUCTION are identical" mean these are true binary copies or was this copy made via Export / Import or similar that could change the *physical ordering of the rows*. This could also explain why you see different sample results.

Randolf,

Agree with the "sample is zero - use the ordinary stats" suggestion, and we're looking at 31 blocks out of 6.5M, so it's easy to get unlucky. A dynamic_sampling hint for the table at level 4 would double the sample size - or a cursor/system setting of 5 might get luckier.

A curiosity I haven't checked - I think it might be possible for identical clones to get different results because there's a random component to the sampling - though the latest versions of Oracle might be using the seed() option with the sample clause to avoid this possibility.
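The "easy to get unlucky" point above can be quantified with a rough sketch. The 31-block sample and ~6.5M-block table size are taken from this thread; the clustering figures below are purely hypothetical:

```python
from math import comb

# Figures from the thread: a level-4 dynamic sample reads 31 blocks of a
# roughly 6.5M-block table. The matching-block counts below are hypothetical.
TOTAL_BLOCKS = 6_500_000
SAMPLE_BLOCKS = 31

def p_all_miss(matching_blocks: int) -> float:
    """Hypergeometric probability that none of the sampled blocks holds a
    row matching the predicates (whole sample drawn from non-matching blocks)."""
    return (comb(TOTAL_BLOCKS - matching_blocks, SAMPLE_BLOCKS)
            / comb(TOTAL_BLOCKS, SAMPLE_BLOCKS))

for k in (1_000, 10_000, 100_000):
    print(f"rows clustered in {k:>7} blocks -> P(sample sees none) = {p_all_miss(k):.3f}")
```

Even with matching rows spread over 100,000 blocks, the 31-block sample misses all of them more than 60% of the time, so two identical clones can easily land on opposite sides of the "filtered sample card. = 0" rejection.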

Regards

Jonathan Lewis

Mohamed Houri

Hi Randolf,

In PROD the cardinality 1 is the result of the CBO using the available statistics.

In the COPY database, if I alter the session to not use dynamic sampling, then I get the same behavior as in PRODUCTION (I have tested with the original query as well, and I got the dramatic NESTED LOOP plan when DS = 0), as shown below:

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

SQL> explain plan for
  2  select count(1)
  3  from TABLEXXX
  4  WHERE TABLEXXX.a_type      = 'aaaaaa'
  5  AND TABLEXXX.n_id          = 159947
  6  AND TABLEXXX.t_name        = 'bbbbbbb'
  7  AND TABLEXXX.a_method      = 'aaaaaa'
  8  AND TABLEXXX.a_status NOT IN ('CANCELED')
  9  ;

Explained.

SQL> select * from table(dbms_xplan.display);

Plan hash value: 1466985234

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |     1 | 35 | 27811   (1)| 00:05:34 |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        |   263K| 8990K| 27811   (1)| 00:05:34 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   | 1733K|       |  1448 (1)| 00:00:18 |

------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("TABLEXXX"."T_NAME"='bbbbbbb' AND
              "TABLEXXX"."A_TYPE"='aaaaaa' AND
              "TABLEXXX"."A_METHOD"='aaaaaa' AND
              "TABLEXXX"."A_STATUS"<>'CANCELED')
   3 - access("TABLEXXX"."N_ID"=159947)

Note

-----

   - dynamic sampling used for this statement (level=4)

SQL> alter session set optimizer_dynamic_sampling=0;

Session altered.

SQL> explain plan for
  2  select count(1)
  3  from TABLEXXX
  4  WHERE TABLEXXX.a_type      = 'aaaaaa'
  5  AND TABLEXXX.n_id          = 159947
  6  AND TABLEXXX.t_name        = 'bbbbbbb'
  7  AND TABLEXXX.a_method      = 'aaaaaa'
  8  AND TABLEXXX.a_status NOT IN ('CANCELED')
  9  ;

Explained.

SQL> select * from table(dbms_xplan.display);

Plan hash value: 1466985234

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |     1 |    35 | 27811   (1)| 00:05:34 |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        |     1 | 35 | 27811   (1)| 00:05:34 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   | 1733K|       |  1448 (1)| 00:00:18 |

------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("TABLEXXX"."T_NAME"='bbbbbbb' AND
              "TABLEXXX"."A_TYPE"='aaaaaa' AND
              "TABLEXXX"."A_METHOD"='aaaaaa' AND
              "TABLEXXX"."A_STATUS"<>'CANCELED')
   3 - access("TABLEXXX"."N_ID"=159947)

 

As regards errors in Dynamic Sampling: yes, they can show up in the 10053 trace file. I have already mentioned it in https://jonathanlewis.wordpress.com/2014/11/26/lunchtime-quiz-2/ (see comment n°11).

The COPY database is refreshed from PROD data using RMAN.

And I think you are right that a higher sample level will make DS kick in, as shown by the following example run in the PROD database:

-- From PROD database

alter session set optimizer_dynamic_sampling=4;

Plan hash value: 1466985234

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |     1 | 35 | 27811   (1)| 00:05:34 |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        |    1  |    35 | 27811   (1)| 00:05:34 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   | 1733K|       |  1448 (1)| 00:00:18 |

------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("TABLEXXX"."T_NAME"='bbbbbbb' AND
              "TABLEXXX"."A_TYPE"='aaaaaa' AND
              "TABLEXXX"."A_METHOD"='aaaaaa' AND
              "TABLEXXX"."A_STATUS"<>'CANCELED')
   3 - access("TABLEXXX"."N_ID"=159947)

--> no Note about dynamic sampling

SQL> alter session set optimizer_dynamic_sampling=6;

Plan hash value: 1466985234

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |     1 | 35 | 27811   (1)| 00:05:34 |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        | 1560K|    52M| 27811   (1)| 00:05:34 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   | 1733K|       |  1448 (1)| 00:00:18 |

------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

 


   2 - filter("TABLEXXX"."T_NAME"='bbbbbbb' AND
              "TABLEXXX"."A_TYPE"='aaaaaa' AND
              "TABLEXXX"."A_METHOD"='aaaaaa' AND
              "TABLEXXX"."A_STATUS"<>'CANCELED')
   3 - access("TABLEXXX"."N_ID"=159947)

Note

-----

   - dynamic sampling used for this statement (level=6)

Best regards

Mohamed Houri

Randolf Geist

Mohamed Houri wrote:

In PROD the cardinality 1 is the result of the CBO using the available statistics.

In the COPY database, if I alter the session to not use dynamic sampling, then I get the same behavior as in PRODUCTION (I have tested with the original query as well, and I got the dramatic NESTED LOOP plan when DS = 0), as shown below:

As per regards to Errors in Dynamic Sampling, yes they can be shown in the 10053 trace file. I have already mentioned it in this https://jonathanlewis.wordpress.com/2014/11/26/lunchtime-quiz-2/ (see comments n°11)

The COPY database is refreshed by PROD data using RMAN

And I think you are right that a higher sample level will make DS kick in, as shown by the following example run in the PROD database

Hi Mohamed,

regarding the error: in the most recent versions (12c, backported to 11.2.0.4) the new Dynamic Sampling code is used, and one unfortunate side effect of this is that far less information about Dynamic Sampling is shown in the default 10053 trace. You're right about the output and the error shown in previous versions (and your version, 11.2.0.3, should not exhibit the new behaviour), but that's the reason I suggested the additional trace: I don't think the error would show up in the most recent versions. I hadn't realized that you are still on 11.2.0.3 when I wrote my reply above, and I haven't checked yet whether the trace level suggested by Stefan shows the errors or not.

Looking at your sample queries: have you checked whether you can get improved stats-based estimates using extended statistics, or do you already have a corresponding column group defined? At first glance these predicates look suitable for extended stats.
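The correlation problem behind the extended-statistics suggestion can be sketched numerically. All NDVs below are hypothetical; only the row count and the column-group NDV are taken from the 10053 excerpt quoted elsewhere in the thread:

```python
# Hypothetical illustration of correlated-column under-estimation.
num_rows = 213_774_610  # table cardinality from the thread's 10053 excerpt
ndv = {"a_type": 5, "n_id": 50_000, "t_name": 20, "a_method": 5}  # made-up NDVs

# Independence assumption: individual selectivities simply multiply.
sel_independent = 1.0
for n in ndv.values():
    sel_independent *= 1.0 / n
est_independent = num_rows * sel_independent

# With a column group, the combined NDV can be used instead.
cg_ndv = 4_759  # column-group NDV from the thread's 10053 excerpt
est_column_group = num_rows / cg_ndv

print(f"independent columns: {est_independent:.2f} rows")
print(f"column group:        {est_column_group:.0f} rows")
```

With correlated columns, multiplying per-column selectivities drives the estimate toward the suspicious "1 row" seen in the thread, while the column-group NDV keeps it at a realistic scale.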

Randolf

Mohamed Houri

Hi Randolf,

I have asked the local DBA to provide me the 10046 trace file, and below are the important lines (I am still waiting for the Dynamic Sampling trace file):

PARSING IN CURSOR #140153272422968 len=635 dep=1 uid=0 oct=3 lid=0 tim=1433412864791141 hv=2799649739 ad='6731c4b68' sqlid='83n3mz2mdyhyb'

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')

NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */

              NVL(SUM(C1),0), NVL(SUM(C2),0)

              FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("TABLEXXX")

                               FULL("TABLEXXX") NO_PARALLEL_INDEX("TABLEXXX") */

                         1 AS C1

                       , CASE WHEN ….

THEN 1 ELSE 0 END AS C2

              FROM "XXXXX"."TABLEXXX" SAMPLE BLOCK (0.000482 , 1) SEED (1) "TABLEXXX") SAMPLESUB

END OF STMT

PARSE #140153272422968:c=1000,e=805,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=1433412864791140

BINDS #140153272422968:

Bind#0

  oacdty=01 mxl=32(19) mxlc=00 mal=00 scl=00 pre=00

  oacflg=10 fl2=0001 frm=01 csi=31 siz=152 off=0

  kxsbbbfp=7f77f9fbbf68  bln=32 avl=19  flg=05

  value="XXXXXXX"

Bind#1

  oacdty=02 mxl=22(04) mxlc=00 mal=00 scl=00 pre=00

  oacflg=10 fl2=0001 frm=00 csi=00 siz=0 off=32

kxsbbbfp=7f77f9fbbf88  bln=22  avl=04 flg=01

  value=159947

Bind#2

  oacdty=01 mxl=32(24) mxlc=00 mal=00 scl=00 pre=00

  oacflg=10 fl2=0001 frm=01 csi=31 siz=0 off=56

kxsbbbfp=7f77f9fbbfa0  bln=32  avl=24 flg=01

  value=" XXXXXXX "

Bind#3

  oacdty=01 mxl=32(19) mxlc=00 mal=00 scl=00 pre=00

  oacflg=10 fl2=0001 frm=01 csi=31 siz=0 off=88

kxsbbbfp=7f77f9fbbfc0  bln=32  avl=19 flg=01

  value=" XXXXXXX "

Bind#4

  oacdty=01 mxl=32(08) mxlc=00 mal=00 scl=00 pre=00

  oacflg=10 fl2=0001 frm=01 csi=31 siz=0 off=120

kxsbbbfp=7f77f9fbbfe0  bln=32  avl=08 flg=01

value="CANCELED"

EXEC #140153272422968:c=1999,e=1476,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=3237502819,tim=1433412864792685

FETCH #140153272422968:c=102985,e=103550,p=0,cr=169,cu=3,mis=0,r=1,dep=1,og=1,plh=3237502819,tim=1433412864896251

STAT #140153272422968 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=169 pr=0 pw=0 time=103554 us)'

STAT #140153272422968 id=2 cnt=1151 pid=1 pos=1 obj=51423 op='TABLE ACCESS SAMPLE TABLEXXX (cr=169 pr=0 pw=0 time=753 us cost=10 size=49867 card=1061)'

CLOSE #140153272422968:c=0,e=7,dep=1,type=0,tim=1433412864896479

A TKPROF of what precedes gives this:

call     count    cpu    elapsed       disk query    current        rows

------- ------ -------- ---------- ---------- ---------- ----------  ----------

Parse   1      0.00       0.00          0          0          0           0

Execute 1      0.00       0.00          0          0          0           0

Fetch   1      0.10       0.10          0        169          3           1

------- ------ -------- ---------- ---------- ---------- ----------  ----------

total    3      0.10       0.10          0        169          3           1

Misses in library cache during parse: 1

Misses in library cache during execute: 1

Optimizer mode: ALL_ROWS

Parsing user id: SYS (recursive depth: 1)

Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation

---------- ---------- ---------- ---------------------------------------------------

         1          1          1 SORT AGGREGATE (cr=169 pr=0 pw=0 time=103554 us)

      1151       1151       1151 TABLE ACCESS SAMPLE TABLEXXX (cr=169 pr=0 pw=0 time=753 us cost=10 size=49867 card=1061)

********************************************************************************

And this is the part of the TKPROF concerning the query itself:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation

---------- ---------- ---------- ---------------------------------------------------

         1          1          1 SORT AGGREGATE (cr=576618 pr=46538 pw=0 time=332816270 us)

   1531009    1531009 1531009   TABLE ACCESS BY INDEX ROWID TABLEXXX (cr=576618 pr=46538 pw=0 time=285322375 us cost=27811 size=35 card=1)

   1647583    1647583 1647583    INDEX RANGE SCAN IDX_TABLEXXX1 (cr=6427 pr=6236 pw=0 time=37642407 us cost=1448 size=0 card=1733553)(object id 52020)

Unless I have missed something, it seems to me that there is no indication of the failure of Dynamic Sampling in this 10046 trace file.
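As a cross-check on the recursive query's SAMPLE BLOCK (0.000482, 1) clause: assuming Jonathan's figure of roughly 6.5M blocks for the table, that percentage works out to a sample of about 31 blocks, which matches the level-4 block count mentioned earlier:

```python
# Hypothetical cross-check: SAMPLE BLOCK takes a percentage, so the number of
# blocks sampled is total_blocks * pct / 100.
total_blocks = 6_500_000  # assumed table size, per Jonathan's "6.5M" figure
sample_pct = 0.000482     # from the recursive DS query in the 10046 trace

blocks_sampled = total_blocks * sample_pct / 100
print(f"{blocks_sampled:.1f} blocks")
```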

As regards the extended stats, I have another TEST database (not an exact copy of PROD, but showing the same symptoms) where I have tested them. In this particular query I have a predicate with 5 clauses:

SQL> explain plan for
  2  select count(1)
  3  from TABLEXXX
  4  WHERE TABLEXXX.a_type      = 'aaaaaa'
  5  AND TABLEXXX.n_id          = 159947
  6  AND TABLEXXX.t_name        = 'bbbbbbb'
  7  AND TABLEXXX.a_method      = 'aaaaaa'
  8  AND TABLEXXX.a_status NOT IN ('CANCELED')
  9  ;

Extended stats work only with equality predicates, which is why I've considered only the first 4 predicate columns for the column group extension. There is also another difficulty here, because all the above columns have histograms:

SQL> select column_name, histogram
  2  from all_tab_col_statistics
  3  where table_name = 'TABLEXXX'
  4  and column_name in ('A_TYPE'
  5                     ,'N_ID'
  6                     ,'T_NAME'
  7                     ,'A_METHOD'
  8                     );

COLUMN_NAME          HISTOGRAM

------------------- ---------------

A_TYPE               FREQUENCY

N_ID                 HEIGHT BALANCED

A_METHOD             FREQUENCY

T_NAME               FREQUENCY

And there is no guarantee that the resulting column group extension will have a histogram, as I have shown in this article. If the column group extension has no histogram while its underlying columns do, the extension will not be used by the CBO, as shown in the same article. And indeed, when I created an extension on the above first 4 columns and re-collected statistics (with default parameters), here is what I got:

SQL> select column_name, histogram
  2  from all_tab_col_statistics
  3  where table_name = 'TABLEXXX'
  4  and column_name in ('A_TYPE'
  5                     ,'N_ID'
  6                     ,'T_NAME'
  7                     ,'A_METHOD'
  8                     ,'SYS_STUBOE8ADSK9A5YTY$6IKX#5CG'
  9                     );

COLUMN_NAME                     HISTOGRAM

------------------------------ ---------------

SYS_STUBOE8ADSK9A5YTY$6IKX#5CG NONE

A_TYPE                         FREQUENCY

N_ID                           HEIGHT BALANCED

A_METHOD                       FREQUENCY

T_NAME                         FREQUENCY

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |       | | 32689 (100)|          |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        |     1 |    35 | 32689 (1)| 00:06:33 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   |  1744K| |  1400   (1)| 00:00:17 |

------------------------------------------------------------------------------------------------

And when I tried forcing a histogram for the column group extension:

SQL> begin
  2
  3  dbms_stats.gather_table_stats
  4             (user
  5             ,'TABLEXXX'
  6             ,method_opt    => 'for columns SYS_STUBOE8ADSK9A5YTY$6IKX#5CG size skewonly'
  7             ,cascade       => true
  8             ,no_invalidate => false
  9             );
 10  end;
 11  /

PL/SQL procedure successfully completed.

Elapsed: 00:36:16.02

SQL> select column_name, histogram
  2  from all_tab_col_statistics
  3  where table_name = 'TABLEXXX'
  4  and column_name in ('A_TYPE'
  5                     ,'N_ID'
  6                     ,'T_NAME'
  7                     ,'A_METHOD'
  8                     ,'SYS_STUBOE8ADSK9A5YTY$6IKX#5CG'
  9                     );

COLUMN_NAME                     HISTOGRAM

------------------------------ ---------------

SYS_STUBOE8ADSK9A5YTY$6IKX#5CG HEIGHT BALANCED

A_TYPE                         FREQUENCY

N_ID                           HEIGHT BALANCED

A_METHOD                       FREQUENCY

T_NAME                         FREQUENCY

------------------------------------------------------------------------------------------------

| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |                 |       | | 32342 (100)|          |

|   1 |  SORT AGGREGATE              |                 |     1 | 35 |            |          |

|*  2 |   TABLE ACCESS BY INDEX ROWID| TABLEXXX        |  3841 |   131K| 32342 (1)| 00:06:29 |

|*  3 |    INDEX RANGE SCAN          | IDX_TABLEXXX1   |  1683K| |  1339   (1)| 00:00:17 |

------------------------------------------------------------------------------------------------

Column (#42): SYS_STUBOE8ADSK9A5YTY$6IKX#5CG(

    AvgLen: 12 NDV: 4759 Nulls: 0 Density: 0.000018 Min: 98748394 Max: 9868407051

    Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 34

ColGroup Usage:: PredCnt: 4  Matches Full: #1 Partial:  Sel: 0.0000

  Table: BO_MESSAGE  Alias: TABLEXXX

    Card: Original: 213774610.000000  Rounded: 3841  Computed: 3841.15  Non Adjusted: 3841.15

The extension has been used, but it is still not precise (though of course better than Dynamic Sampling at level 4, which was ignored).
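For what it's worth, the 3841 figure is consistent with the column-group density being applied to the original cardinality; the density is shown rounded in the trace, so the check is only approximate:

```python
# Reproducing the CBO arithmetic from the 10053 excerpt above:
# estimated card ~= original cardinality * column-group density.
original_card = 213_774_610  # "Card: Original" from the trace
cg_density = 0.000018        # column-group density (rounded in the trace)

estimate = original_card * cg_density
print(f"{estimate:.0f}")  # close to the trace's "Computed: 3841.15"
```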

Anyway, I have now upped dynamic sampling to level 6 and things are going well.

Best regards

Mohamed Houri

Jonathan Lewis

Mohammed,

You don't have to have an error - all you need (in your case) is that nvl(sum(c2),0) = 0 for Oracle to ignore the sample.
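That condition can be sketched as follows (a simplification for illustration, not Oracle's actual internal code): the recursive DS query returns NVL(SUM(C1),0) rows sampled and NVL(SUM(C2),0) rows passing the predicates, and a zero filtered count makes the optimizer discard the sample and fall back to stored statistics:

```python
from typing import Optional

def ds_cardinality(sum_c1: int, sum_c2: int,
                   sample_fraction: float) -> Optional[float]:
    """Sketch of the dynamic-sampling decision described above: if no sampled
    row passed the predicates (nvl(sum(c2),0) = 0), the sample is ignored."""
    if sum_c2 == 0:
        return None  # sample rejected: caller uses ordinary statistics instead
    # otherwise scale the filtered sample count up to the full table
    return sum_c2 / sample_fraction

# PROD-like sample: 1151 rows sampled, none matched -> sample ignored
print(ds_cardinality(1151, 0, 0.00000482))
# COPY-like sample: one matching row -> the scaled-up DS estimate is used
print(ds_cardinality(1151, 1, 0.00000482))
```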

Regards

Jonathan Lewis

Mohamed Houri

Thanks, Jonathan, for this clarification.

Best regards

Mohamed Houri


Post Details

Locked on Mar 7 2011
Added on Feb 7 2011