This discussion is archived
13 Replies Latest reply: Dec 12, 2008 6:14 AM by Repos

DBMS STATS

Repos Newbie
Hi
I am running a DBMS_STATS job to gather statistics for one table that has one partition. After the partition exchange
I rebuild the index, then I run the stats job, but gathering the stats takes 3 hours to complete. Is there any way I can tune this stats gathering?
I need to run this every day.

The following are the steps

1) partition exchange
2) rebuild index
3) gather stats

How can I tune this stats gathering? The following is the syntax I am using:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SIM', 'SENDUNG', METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => 5, ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, GRANULARITY => 'ALL', CASCADE => FALSE);
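
For context, the three daily steps as SQL might look roughly like this (a sketch only; the staging table, partition, and index names are assumptions, not from the actual schema):

```sql
-- 1) swap the freshly loaded staging table into the partition
--    (sendung_stage, p1 and sendung_ix are hypothetical names)
ALTER TABLE sendung EXCHANGE PARTITION p1
  WITH TABLE sendung_stage WITHOUT VALIDATION;

-- 2) rebuild the local index partition invalidated by the exchange
ALTER INDEX sendung_ix REBUILD PARTITION p1;

-- 3) gather statistics (the call in question)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SIM',
    tabname          => 'SENDUNG',
    method_opt       => 'FOR ALL COLUMNS SIZE 1',
    degree           => 5,
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    granularity      => 'ALL',
    cascade          => FALSE);
END;
/
```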

rds
  • 1. Re: DBMS STATS
    SatishKandi Guru
    Try changing MBRC and workarea size policy.
  • 2. Re: DBMS STATS
    Repos Newbie
    Hi Satish
    The workarea size policy is already in AUTO mode.
    I am using Oracle 10g.
    MBRC is 32.
    Is there another parameter that needs to be set?
    rds
  • 3. Re: DBMS STATS
    SatishKandi Guru
    You can override these settings at session level to good effect, but testing is needed.
  • 4. Re: DBMS STATS
    Repos Newbie
    Hi Satish
    The workarea size policy is already in AUTO mode.
    I am using Oracle 10g.
    Where can I look for MBRC? Is this the same as DB_FILE_MULTIBLOCK_READ_COUNT?
    I checked the view SYS.AUX_STATS$ and MBRC is NULL.
    Is there another parameter that needs to be set?

    How can I use this (MBRC) at session level?
    ALTER SESSION SET ---?
    rds
  • 5. Re: DBMS STATS
    Repos Newbie
    Hi
    How do I set this MBRC parameter? Will it take its value from DB_FILE_MULTIBLOCK_READ_COUNT?
    rds
  • 6. Re: DBMS STATS
    SatishKandi Guru
    MBRC is the same as DB_FILE_MULTIBLOCK_READ_COUNT (it is just a commonly used short form).

    You can set this with ALTER SESSION:
    SQL> alter session set db_file_multiblock_read_count=128;
    
    Session altered.
    
    SQL> alter session set workarea_size_policy=manual;
    
    Session altered.
    But the final value used by Oracle is derived from statistics and may not be the same as the one you have set; hence you need to test whether this change actually helps.
  • 7. Re: DBMS STATS
    Anand... Guru
    Hi..

    You can refer to [http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/]

    HTH
    Anand
  • 8. Re: DBMS STATS
    Randolf Geist Oracle ACE Director
    user1000000 wrote:
    I am running a DBMS_STATS job to gather statistics for one table that has one partition. After the partition exchange
    I rebuild the index, then I run the stats job, but gathering the stats takes 3 hours to complete. Is there any way I can tune this stats gathering?
    I need to run this every day.

    The following are the steps

    1) partition exchange
    2) rebuild index
    3) gather stats
    How long does it take to populate the table that is going to be exchanged?

    How do you populate the exchange table? This might be important due to "delayed block cleanout" issues.

    How long does the index rebuild take?

    Is this a global or a local index?

    If it is a local index you could already rebuild the index on the exchange table before exchanging the partition.
    If it is global you can use the "UPDATE GLOBAL INDEXES" clause of the "exchange partition" operation, so why do you rebuild the index? Does "UPDATE GLOBAL INDEXES" take too long?

    How large is the segment being exchanged (check DBA/ALL/USER_SEGMENTS)?

    You say that the table has one partition, is this a typo, or are there more partitions?

    If there are more partitions, what is the total size of all partitions?

    If there is only one partition, why is the table partitioned? If you only need it to load via exchange partition, then you should use the unpartitioned table as your "main" table and use the "partitioned" table as "load" table.
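
    Swapping the roles as suggested might look roughly like this (a sketch with hypothetical object names; the partitioned table becomes the load target and the plain table serves the users):

    ```sql
    -- SQL*Loader fills partition p1 of the hypothetical partitioned
    -- staging table sendung_stage; the plain table sendung is then
    -- swapped in as the "users" table:
    ALTER TABLE sendung_stage EXCHANGE PARTITION p1
      WITH TABLE sendung WITHOUT VALIDATION;

    -- statistics now only need one level on a plain heap table,
    -- no GLOBAL vs. PARTITION duplication:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SIM',
        tabname          => 'SENDUNG',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => FALSE);
    END;
    /
    ```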
    How can I tune this stats gathering? The following is the syntax I am using:

    EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SIM', 'SENDUNG', METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', DEGREE => 5, ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, GRANULARITY => 'ALL', CASCADE => FALSE);
    Since you're using GRANULARITY => 'ALL', it scans all levels of the segment: all subpartitions individually (if applicable), all partitions individually, and finally the whole table consisting of all (sub-)partitions. So if your table actually consists of only a single partition, the table is scanned twice because of this parameter (once for PARTITION-level and again for GLOBAL-level statistics).
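
    To illustrate the effect of the parameter, the two levels can be gathered separately; the partition name P1 used here is an assumption:

    ```sql
    -- partition-level statistics only (single scan of the partition)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SIM',
        tabname     => 'SENDUNG',
        partname    => 'P1',
        granularity => 'PARTITION');
    END;
    /

    -- table-level (GLOBAL) statistics only (another full scan)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SIM',
        tabname     => 'SENDUNG',
        granularity => 'GLOBAL');
    END;
    /
    ```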

    AUTO_SAMPLE_SIZE might determine that it needs a very large sample size (or even a compute). Check the "SAMPLE_SIZE" column of DBA/ALL/USER_TABLES. If the sample size used is close to the actual row count, you could experiment with a lower "estimate_percent" parameter to see if the statistics generated are still sufficiently representative to generate appropriate execution plans.

    You might want to run the DBMS_STATS job serially and trace the session to find out what it is waiting for most. Use "ALTER SESSION SET SQL_TRACE = TRUE" or extended trace (e.g. DBMS_SUPPORT package in 9i or DBMS_MONITOR in 10g) to enable tracing. Use the "tkprof" utility to analyze the trace file generated.
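
    The tracing suggestion for 10g could be sketched as follows (DBMS_MONITOR as mentioned above; without a session_id it traces the current session):

    ```sql
    -- enable extended SQL trace (waits included) for the current session
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
    END;
    /

    -- run the stats job serially while tracing
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS('SIM', 'SENDUNG', degree => 1);
    END;
    /

    BEGIN
      DBMS_MONITOR.SESSION_TRACE_DISABLE;
    END;
    /
    ```

    Then analyze the trace file from USER_DUMP_DEST with tkprof, e.g. (file names are placeholders): tkprof <sid>_ora_<pid>.trc gather_stats.prf sort=exeela,fchela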

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
  • 9. Re: DBMS STATS
    Repos Newbie
    Hi Randolf
    Thanks, I have answered all your questions.

    The reason why I use partition exchange is:
    I load the data into a stage table, then users need to access this data.
    That is why I use exchange partition to move this data to another table
    and rebuild the index.
    I created one partition in order to use the exchange partition concept.

    During the index rebuild the statistics for the index are gathered automatically.

    I only need to gather statistics for the table and the one partition.

    The syntax which I posted above is what I use to gather the stats.

    In DBMS_STATS I set CASCADE => FALSE to avoid calculating the index stats again.




    How long does it take to populate the table that is going to be exchanged?

    I use SQL*Loader to load the data into the stage table and then exchange the partition
    into the other table (i.e. the permanent table only needs this data).
    It is done in just a few minutes.

    How do you populate the exchange table? This might be important due to "delayed block cleanout" issues.

    I use EXCHANGE PARTITION P1 WITH TABLE tablename WITHOUT VALIDATION

    How long does the index rebuild take?

    2 hrs

    Is this a global or a local index?

    local index

    If it is a local index you could already rebuild the index on the exchange table before exchanging the partition.
    If it is global you can use the "UPDATE GLOBAL INDEXES" clause of the "exchange partition" operation, so why do you rebuild the index? Does "UPDATE GLOBAL INDEXES" take too long?

    How large is the segment being exchanged (check DBA/ALL/USER_SEGMENTS)?

    200000 rows

    You say that the table has one partition, is this a typo, or are there more partitions?

    only one partition, in order to use the exchange partition concept

    If there are more partitions, what is the total size of all partitions?
    no

    If there is only one partition, why is the table partitioned? If you only need it to load via exchange partition,
    then you should use the unpartitioned table as your "main" table and use the "partitioned" table as "load" table.

    Will this improve performance?

    Best Rds
  • 10. Re: DBMS STATS
    Randolf Geist Oracle ACE Director
    user1000000 wrote:
    Hi Randolf
    Thanks, I have answered all your questions.

    The reason why I use partition exchange is:
    I load the data into a stage table, then users need to access this data.
    That is why I use exchange partition to move this data to another table
    and rebuild the index.
    I created one partition in order to use the exchange partition concept.
    My point here is that you could use the "unpartitioned" table as your "users" table and the partitioned table as the "staging" table that is populated by SQL*Loader. This way you simplify both the optimizer's work and the statistics management on the final table. If you use "GRANULARITY=>'ALL'" then your partitioned table will effectively be scanned twice.

    Therefore I recommend to use the unpartitioned table as your "users" table.
    How long does it take to populate the table that is going to be exchanged?

    I use SQL*Loader to load the data into the stage table and then exchange the partition
    into the other table (i.e. the permanent table only needs this data).
    It is done in just a few minutes.
    So you say that loading the 200,000 records into the staging table takes only a couple of minutes using SQL*Loader, right?

    Or do you mean that the "exchange partition" operation takes only a couple of minutes?

    If yes, then how long does it take SQL*Loader to populate the staging table?
    How long does the index rebuild take?

    2 hrs
    I'm not sure why loading the data should take only a couple of minutes but rebuilding an index should take two hours, that sounds odd.
    How large is the segment being exchanged (check DBA/ALL/USER_SEGMENTS)?

    200000 rows
    Sorry, I obviously wasn't precise enough: I asked for the segment size in bytes or blocks, the row count doesn't influence how long a scan will need.

    Your rows must be very large so that it takes two hours to rebuild an index and more than three hours to gather the statistics. Let's say you have an already quite large average row size of 1500 bytes, then 200,000 rows allocate without any overhead approx. 300 MB. This should take less than a minute (60 seconds => 5 MB / sec. to scan 300 MB) to read rather than hours.
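
    The actual segment size in bytes/blocks can be read straight from the dictionary, e.g. (owner and segment name taken from the thread; adjust as needed):

    ```sql
    SELECT segment_name, partition_name, blocks,
           ROUND(bytes / 1024 / 1024) AS size_mb
      FROM dba_segments
     WHERE owner = 'SIM'
       AND segment_name = 'SENDUNG';
    ```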

    Does your table contain long/lobs or large VARCHAR2 columns?

    Regarding your initial question about the DBMS_STATS performance: I've already provided some suggestions in my previous post apart from the "GRANULARITY" issue. Check the sample size as described, and try a lower estimate_percent setting if the AUTO_SAMPLE_SIZE uses a large sample size (or even computes rather than samples).

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
  • 11. Re: DBMS STATS
    Repos Newbie
    Hi Randolf
    Thanks for your help.
    I will try the change in design which you mention (partition the stage table and not the permanent table).

    In the meantime, what is your suggestion for setting the granularity?
    Can I set it to AUTO? The reason why I set it to ALL is to gather stats for both the table and the partition.
    Last time I used GRANULARITY => 'PARTITION', but this did not calculate the stats for the table, only for the partition.

    I need to calculate them for both the partition and the table.

    Here are the details which you asked for:

    NUM_ROWS 106112498
    BLOCKS   1407688
    avg_row_len 90
    sample_size 5656123

    for the partition

    sample_size 5701238
    avg_row_len 90


    rds
  • 12. Re: DBMS STATS
    Randolf Geist Oracle ACE Director
    user1000000 wrote:
    In the meantime, what is your suggestion for setting the granularity?
    Can I set it to AUTO? The reason why I set it to ALL is to gather stats for both the table and the partition.
    Last time I used GRANULARITY => 'PARTITION', but this did not calculate the stats for the table, only for the partition.

    I need to calculate them for both the partition and the table.
    And what is the reason why you need to calculate the GLOBAL statistics and the PARTITION level statistics?

    If your table holds exactly one partition, why can't you always prune to that single, known partition, e.g. using explicit partition pruning:

    SELECT ... FROM <PART_TABLE> PARTITION (<PART_NAME>)

    This way it's ensured that always the partition level statistics will be used. If required, you could define a view that contains this partition pruning clause.
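
    Such a view could be defined like this (view and partition names are made up for illustration):

    ```sql
    CREATE OR REPLACE VIEW sendung_current AS
    SELECT *
      FROM sendung PARTITION (p1);
    ```

    Users then query SENDUNG_CURRENT and always hit the single partition, so its partition-level statistics are used.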

    Alternatively you can add a predicate that filters on the partition key accordingly. Of course this depends on your partition definition.

    SELECT ... FROM <PART_TABLE> WHERE <PART_KEY> = <VALUE>

    This applies obviously only if a single value is used as partition key, otherwise you need to use appropriate ranges etc.

    >
    Here are the details which you asked for:

    NUM_ROWS 106112498
    BLOCKS   1407688
    avg_row_len 90
    sample_size 5656123

    for the partition

    sample_size 5701238
    avg_row_len 90
    Well, and how does your previous information regarding 200,000 rows fit into this picture? I see here 100 million rows allocating approx. 11 GB, assuming a block size of 8 KB.

    The auto sample size seems to have chosen approx. 20 percent, which is a lot for such a large table. You could experiment with a lower "estimate_percent" setting, like 10, 5, 3 or even 1 percent.
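
    Such an experiment could look like this, keeping all other parameters of the original call and only fixing the sample at 5 percent (one of the suggested values):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SIM',
        tabname          => 'SENDUNG',
        method_opt       => 'FOR ALL COLUMNS SIZE 1',
        degree           => 5,
        estimate_percent => 5,   -- instead of DBMS_STATS.AUTO_SAMPLE_SIZE
        granularity      => 'ALL',
        cascade          => FALSE);
    END;
    /
    ```

    Afterwards, compare execution plans to verify that the lower sample is still representative.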

    Again: How many rows do you load using SQL*Loader and how long does this take? Do you use direct-path loading or conventional loading?

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
  • 13. Re: DBMS STATS
    Repos Newbie
    Hi Randolf
    Thanks, I will try with 5 or 10% instead of the auto sample size.
    I am using direct path with the parallel clause for loading the data.

    thanks & best rds
