This discussion is archived
9 Replies Latest reply: Nov 16, 2012 3:10 AM by Nikolay Savvinov

Table has 80 million records - Performance impact if we stop archiving

974575 Newbie
Hi All,

I have a table (Oracle 11g) which currently has around 80 million records. Until now we have done weekly archiving to keep the size under control, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.

I was just wondering whether that is true, and what kind of effect it would have on querying and insertion if the table size is 80 million rows and growing every day?

Any comments welcomed.
  • 1. Re: Table has 80 million records - Performance impact if we stop archiving
    P.Forstmann Guru
    What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes as you will if the average row size is 5K.

    About performance impact, it depends on the queries that access this table: the more data the queries need to process and/or return as a result set, the bigger the impact on performance for those queries.

    You haven't given enough input for a good answer. Ideally you should post the DDL statements that create this table and its indexes, plus the SQL queries that use them.

    In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition plus additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
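    As a rough sketch of the partitioning option mentioned above (table and column names are made up for illustration, not taken from the original post), a date-range-partitioned table might look like this:

```sql
-- Hypothetical example: range-partition a large table by month so that
-- old data can be archived by dropping a partition (a fast metadata
-- operation) instead of deleting millions of rows.
CREATE TABLE orders_hist (
  order_id    NUMBER        NOT NULL,
  created_on  DATE          NOT NULL,
  address     VARCHAR2(200)
)
PARTITION BY RANGE (created_on) (
  PARTITION p2012_10 VALUES LESS THAN (DATE '2012-11-01'),
  PARTITION p2012_11 VALUES LESS THAN (DATE '2012-12-01'),
  PARTITION p_future VALUES LESS THAN (MAXVALUE)
);

-- Weekly/monthly archiving then becomes:
-- ALTER TABLE orders_hist DROP PARTITION p2012_10;
```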
  • 2. Re: Table has 80 million records - Performance impact if we stop archiving
    974575 Newbie
    Thanks a lot for the insight mate.

    The average row size is somewhere around 4K. The table has around 20 columns, basically address, date and time values, and there are around 5 unclustered indexes along with 4 unique constraints. (At roughly 4K per row, 80 million rows works out to something like 320 GB of data.)

    I know the DDL statements would have made much more sense, but unfortunately I am not at liberty to disclose that information.

    The query that accesses this table is a basic SELECT picking up a group of 6 columns based on a number of conditions.

    Thanks for the link, it is useful!
  • 3. Re: Table has 80 million records - Performance impact if we stop archiving
    Dom Brooks Guru
    See [url www.evdbt.com/TGorman%20TD2009%20DWScale.doc]Tim Gorman: Scaling to Infinity
  • 4. Re: Table has 80 million records - Performance impact if we stop archiving
    974575 Newbie
    There is another question that I would like to ask.

    Does creating indexes after creating a stored procedure make any difference to performance?

    Another of my 'specialists' suggested that we should create all the indexes before we create and compile the stored procedures that query the tables, because the procedures build their own execution path and, being pre-compiled code, will not be able to make full use of newly created indexes. To me this does not make any sense. But can anyone clarify?
  • 5. Re: Table has 80 million records - Performance impact if we stop archiving
    user296828 Expert
    As mentioned earlier, Oracle can manage that number of records and even more.

    For such large tables, the feature Oracle offers is partitioning:
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112
  • 6. Re: Table has 80 million records - Performance impact if we stop archiving
    user296828 Expert
    Does creating indexes after creating a stored procedure make any difference to performance?
    No.

    another one of my 'Specialist' suggested that we should create all the indexes before we create and compile the stored procedure that are using the tables for querying as they build their own execution path and so will not be able to make full use of the newly created indexes as they sp are pre-compiled piece of code. To me this does not make any sense . But can anyone clarify ?
    You are right.
  • 7. Re: Table has 80 million records - Performance impact if we stop archiving
    Dom Brooks Guru
    Does creating indexes after creating stored procedure make any difference on performance ?
    Doesn't make any difference.
    as they build their own execution path and so will not be able to make full use of the newly created indexes as they sp are pre-compiled piece of code
    This is nonsense.

    You're concerned about the execution plans of the SQL statements in your PL/SQL.

    Execution plans are evaluated at runtime, not at compile time, and at runtime whether an execution plan is actually optimised from scratch depends on whether a reusable plan already exists in memory.

    When you create an index, dependent execution plans are invalidated and therefore at subsequent runtime re-evaluated.

    So, what your specialist friend is saying is a) worrying and b) rubbish.
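    A quick way to see this behaviour for yourself (object names here are hypothetical, just for illustration) is to run a query, create an index on the filtered column, and run the query again — the index creation invalidates the cached cursor, so the next execution is hard-parsed and the optimiser can consider the new index:

```sql
-- Hypothetical demonstration that creating an index invalidates
-- dependent cached plans, which are then re-optimised on next execution.
SELECT /* demo */ COUNT(*) FROM big_table WHERE status = 'OPEN';

-- Check the cursor's invalidation count before and after:
SELECT sql_id, invalidations FROM v$sql WHERE sql_text LIKE '%demo%';

CREATE INDEX big_table_status_ix ON big_table (status);

-- Re-run the query, then inspect the plan actually used:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL));
```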
  • 8. Re: Table has 80 million records - Performance impact if we stop archiving
    Mohamed Houri Pro
    Another of my 'specialists' suggested that we should create all the indexes before we create
    and compile the stored procedures that query the tables, because they build their own
    execution path and, being pre-compiled code, will not be able to make full use of the
    newly created indexes. To me this does not make any sense. But can anyone clarify?
    :-)

    You should tell your specialist that indexes are not used during the compilation/creation of the procedure. Indexes are used by the CBO during the execution of a SQL query. That SQL query could be part of your stored procedure, and hence the indexes might be used during the execution of the PL/SQL stored procedure/function, but not during its compilation or creation.

    Best Regards

    Mohamed Houri
    www.hourim.wordpress.com
  • 9. Re: Table has 80 million records - Performance impact if we stop archiving
    Nikolay Savvinov Guru
    Hi,

    as a non-partitioned table grows, performance will inevitably degrade. In the best case it degrades logarithmically, but more realistically, for some queries/DML you can expect linear growth or even worse. More importantly, all maintenance operations against that table and its indexes will take longer.

    The ideal solution for such problems is partitioning. In 11g, there are some nice features which facilitate information lifecycle management with partitioning -- you can easily (or even automatically) compress/move/delete old data. So I think partitioning (if your license covers it) is the right way to go, but you'll have to spend some time deciding on the details -- partition granularity, archiving strategy and, most importantly, the choice between local and global indexes.

    Best regards,
    Nikolay
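    One of the 11g features Nikolay alludes to is interval partitioning, where Oracle creates new partitions automatically as data for new periods arrives. A hedged sketch, with made-up table and column names:

```sql
-- Hypothetical 11g interval-partitioned table: Oracle automatically
-- creates a new monthly partition when a row for a new month is inserted.
CREATE TABLE events_hist (
  event_id   NUMBER NOT NULL,
  event_date DATE   NOT NULL
)
PARTITION BY RANGE (event_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) (
  PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01')
);

-- Old data can then be compressed or dropped per partition, e.g.:
-- ALTER TABLE events_hist MOVE PARTITION p_initial COMPRESS;
```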
