This discussion is archived
53 Replies - Latest reply: May 14, 2009 12:47 PM by Randolf Geist
  • 30. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hi Jonathan,

    My sincere apologies, I misunderstood one aspect here: I am using an ASSM tablespace only.

    I will be posting the runtime execution plan with the predicate information shortly. I would also like to remind you of the flow once again.

    Initially I insert a row into table A, which in turn fires an AFTER INSERT trigger. This trigger has business logic to update a row in table B and to insert another row into table YYYY. Before inserting the row into the YYYY table we first fire the problematic select query on the YYYY table (which we have seen before).

    Note: One important thing I forgot to mention is that all 16 threads only commit at the end of the transaction. Until then, the rows in A, B and YYYY are not visible from another SQL session.
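    A minimal sketch of the trigger flow described above (all object and column names here are hypothetical; the real trigger body was not posted):

```sql
-- Hypothetical sketch of the described flow: an insert into A fires an
-- AFTER INSERT trigger that updates B and inserts into YYYY, querying
-- YYYY first. Names are illustrative only.
CREATE OR REPLACE TRIGGER a_after_insert
AFTER INSERT ON a
FOR EACH ROW
DECLARE
    l_x1 yyyy.x1%TYPE;
    l_x2 yyyy.x2%TYPE;
BEGIN
    -- business logic: update the related row in B
    UPDATE b
    SET    some_col = some_col + 1
    WHERE  b_key = :NEW.a_key;

    -- the problematic select against YYYY, fired before the insert
    BEGIN
        SELECT x1, x2 INTO l_x1, l_x2
        FROM   yyyy
        WHERE  lptid = :NEW.lptid
        AND    ccc   = :NEW.ccc;
    EXCEPTION
        WHEN NO_DATA_FOUND THEN
            INSERT INTO yyyy (lptid, ccc, x1, x2)
            VALUES (:NEW.lptid, :NEW.ccc, :NEW.x1, :NEW.x2);
    END;
END;
/
```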

    Thanks & Best Regards
    Satish

    Edited by: Satish Kumar Ballepu on May 14, 2009 6:00 AM
  • 31. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hello Jonathan,

    Please find the runtime plan of the problematic sql.


    {
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    SQL_ID gqb7gps2yc981, child number 0
    -------------------------------------
    SELECT X1 ,X2 FROM YYY WHERE LPTID = :B2 AND CCC = :B1

    Plan hash value: 3592773895

    -----------------------------------------------------------------------------------------------
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    -----------------------------------------------------------------------------------------------
    | 0 | SELECT STATEMENT | | | | 1 (100)| |
    | 1 | TABLE ACCESS BY INDEX ROWID| YYYY | 1 | 46 | 1 (0)| 00:00:01 |
    |* 2 | INDEX UNIQUE SCAN | CCC_IDX | 1 | | 1 (0)| 00:00:01 |
    -----------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

    2 - access("LPTID"=:B2 AND "CCC"=:B1)

    }

    Please let me know if I am missing anything else.

    Regards
    Satish
  • 32. Re: Oracle 10g performance degrades while concurrent inserts into a table
    Jonathan Lewis Oracle ACE Director
    Satish Kumar Ballepu wrote:
    Hello Jonathan,

    Please find the runtime plan of the problematic sql.
     
    |*  2 |   INDEX UNIQUE SCAN         | CCC_IDX |     1 |       |     1   (0)| 00:00:01 |
    
    2 - access("LPTID"=:B2 AND "CCC"=:B1)
    That's interesting - maybe there is something relevant in the example I posted on my blog.

    However, it's also possible that some people can run this query and use the index while other people disable the index and end up doing a tablescan.

    Did you get this plan by setting up some bind variables in a SQL*Plus session and running it yourself, or did you capture it while the main process was running?

    If you're running it yourself, can you enable sql_trace (or the 'gather_plan_statistics' option) so that you can check how many consistent reads you do when running the statement.
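    For reference, a minimal SQL*Plus sketch of that check (the bind values shown are placeholders, and the bind data types are assumptions):

```sql
-- Run the statement with rowsource statistics enabled, then report the
-- actual rows and buffer gets per plan step from the cursor just run.
VARIABLE b2 NUMBER
VARIABLE b1 VARCHAR2(30)
EXEC :b2 := 12345
EXEC :b1 := 'SOME_VALUE'

SELECT /*+ gather_plan_statistics */ x1, x2
FROM   yyy
WHERE  lptid = :b2
AND    ccc   = :b1;

SELECT *
FROM   TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```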

    If you've got AWR (or statspack at level 6) running while this is going on, this statement should be captured with its plans in the repository - in which case you will be able to use sprepsql.sql or awrsqrpt.sql to report ALL the execution plans it has ever used.

    Bottom line - your tkprof showed a huge number of CR gets per execution and we need to know (a) is it telling the truth, and if so (b) why that's happening.

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
  • 33. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hi Jonathan

    Thanks for the quick response.

    The execution plan was taken while the main process was running.

    There is only one plan for this SQL statement in the awrsqrpt report.


    Thanks,
    Satish
  • 34. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Will the ASSM tablespace cause such huge consistent reads?

    I would also like to remind you of the flow once again.

    Initially I insert a row into table A, which in turn fires an AFTER INSERT trigger. This trigger has business logic to update a row in table B and to insert another row into table YYYY. Before inserting the row into the YYYY table we first fire the problematic select query on the YYYY table (which we have seen before).


    As I said before, all 16 threads only commit at the end of the transaction. Until then, the rows in A, B and YYYY are not visible from another SQL session.

    Thanks,
    Sathish

    Edited by: Satish Kumar Ballepu on May 14, 2009 7:05 AM
  • 35. Re: Oracle 10g performance degrades while concurrent inserts into a table
    Jonathan Lewis Oracle ACE Director
    Satish Kumar Ballepu wrote:

    The execution plan was taken when the main process was running
    There is only one plan for this sql statement in the awrsqrpt report .
    Fascinating - I take it the one in the AWR report is the index unique scan?
    Can you query the v$sql row for that sql_id to get:
    invalidations, parse_calls, executions, fetches, rows_processed, disk_reads, buffer_gets, cpu_time, elapsed_time.

    With the concurrent processes you've got, do they update a lot of data (possibly one row at a time) in a small section of the table and only commit when they've updated all the data they need to change - or does any other process do something of the same sort while your job is running? (I wasn't sure from your description whether a transaction was one pass through the cycle you described, or lots.)


    Extra thought: if you can monitor v$sess_io while these processes are running, do you see a very large number of "consistent_changes" - perhaps a significant fraction of "consistent_gets" and larger than "block_changes"? If so, the problem may actually be the one I set up on my blog.
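    A query along these lines would show it (the username filter is an assumption; substitute however you identify the 16 worker sessions):

```sql
-- Snapshot the read-consistency workload of the worker sessions.
-- If consistent_changes approaches consistent_gets and dwarfs
-- block_changes, most gets are spent rebuilding read-consistent blocks.
SELECT io.sid, io.consistent_gets, io.consistent_changes, io.block_changes
FROM   v$sess_io io
WHERE  io.sid IN (SELECT s.sid
                  FROM   v$session s
                  WHERE  s.username = 'APP_USER')  -- assumed schema name
ORDER  BY io.sid;
```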



    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk


    "The temptation to form premature theories upon insufficient data is the bane of our profession."
    Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear".
  • 36. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hi Jonathan ,

    Thanks once again for the immediate response. Please find the information asked for below.


    All 16 threads are running now; I am taking these statistics online.
    Each thread processes one row at a time, and every thread commits only when the complete set of records has been inserted.

    As the formatting was having some issues, I have separated the columns with commas instead of tabbed spaces.

    {

    INVALIDATIONS,PARSE_CALLS,EXECUTIONS,FETCHES,ROWS_PROCESSED,DISK_READS,BUFFER_GETS,CPU_TIME,ELAPSED_TIME
    0,43,302753,302738,302737,18368,3408080360,5.8946E+10,5.9141E+10


    select invalidations, parse_calls, executions, fetches, rows_processed, disk_reads, buffer_gets, cpu_time, elapsed_time from v$sql where sql_id='gqb7gps2yc981';


    SID CONSISTENT_GETS CONSISTENT_CHANGES BLOCK_CHANGES
    ---------- --------------- ------------------ --------------------------------------------------
    12 296056775 294639800 422411
    15 143586392 142679661 286607
    20 289559513 288321992 388823
    43 322879433 321490856 445073
    46 162329412 161617954 211020
    50 161552418 160856836 230932
    52 162293618 161583871 231630
    87 300234979 298670773 511110
    88 146568693 145889382 224219
    96 298368529 296508004 619528
    107 145449749 144844922 178159
    110 317738079 316359743 423094
    121 317491436 315981082 442809
    123 316371658 314603256 597941
    158 151195244 150542752 197364
    167 300297662 298474274 571502

    16 rows selected.


    }
  • 37. Re: Oracle 10g performance degrades while concurrent inserts into a table
    680087 Pro
    Hi,
    3. Oracle Version is 10.2.0
    What is your exact Oracle version?
  • 38. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Following is my database version.

    SQL> select * from v$version;

    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for Solaris: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production

    Edited by: Satish Kumar Ballepu on May 14, 2009 7:41 AM
  • 39. Re: Oracle 10g performance degrades while concurrent inserts into a table
    26741 Oracle ACE
    Could you take a few snapshots of segment statistics on the YYYY table?

    exec dbms_stats.flush_database_monitoring_info;

    select to_char(sysdate,'DD-MON HH24:MI:SS'), statistic_name, value
    from v$segment_statistics
    where owner = '&owner'
    and object_type = 'TABLE'
    and object_name = 'YYYY'
    and (statistic_name like 'physical%' or statistic_name like 'logical%')
    order by 1, 2
    /

    every 10 or 30 minutes?

    or, to get historical stats from the AWR repository
    select o.object_name,
    to_char(t.end_interval_time,'DD-MON HH24:MI:SS'),
    s.logical_reads_delta,
    s.buffer_busy_waits_delta,
    s.db_block_changes_delta,
    s.itl_waits_delta,
    s.space_used_total,
    s.space_used_delta,
    s.space_allocated_total,
    s.space_allocated_delta,
    s.table_scans_total,
    s.table_scans_delta
    from dba_hist_seg_stat s, dba_hist_seg_stat_obj o, dba_hist_snapshot t
    where o.owner='&owner'
    and o.object_name = '&table_or_index'
    and o.obj#=s.obj#
    and o.dataobj#=s.dataobj#
    and s.snap_id=t.snap_id
    order by t.end_interval_time
    /
  • 40. Re: Oracle 10g performance degrades while concurrent inserts into a table
    Jonathan Lewis Oracle ACE Director
    Satish Kumar Ballepu wrote:
    SID CONSISTENT_GETS CONSISTENT_CHANGES BLOCK_CHANGES
    ---------- --------------- ------------------ --------------------------------------------------
    12       296056775          294639800        422411
    15       143586392          142679661        286607
    20       289559513          288321992        388823
    43       322879433          321490856        445073
    46       162329412          161617954        211020
    50       161552418          160856836        230932
    52       162293618          161583871        231630
    87       300234979          298670773        511110
    88       146568693          145889382        224219
    96       298368529          296508004        619528
    107       145449749          144844922        178159
    110       317738079          316359743        423094
    121       317491436          315981082        442809
    123       316371658          314603256        597941
    158       151195244          150542752        197364
    167       300297662          298474274        571502
    
    16 rows selected.
    Fantastic, I don't think I've seen one like this for about 15 years.
    Look at session 12: it has made 422,000 block changes, but it has used 294 million visits to undo blocks to create read-consistent blocks. Less than one percent of the work done by that session is "constructive"; the rest of it is spent "going backwards".

    We can check that with a query like this:
    column name format a40
    column value format 9,999,999,999
    
    select
         sn.name, ss.value
    from
         v$sesstat     ss,
         v$statname     sn
    where
         ss.sid = 12
    and     ss.value != 0
    and     sn.statistic# = ss.statistic#
    ;
    When you post the results for one of these sessions, put the word "code" in curly brackets, no spaces, before and after the output. (See my footnote).

    I think your 16 processes are all updating the same small piece of the database constantly, and each session has to make that piece of the database read-consistent before it can query it again. So everyone modifies block X, and I have to unwind 15 other changes before I can read block X; then everyone modifies it again - and I have to unwind 30 changes before I can read it; and so on.

    Whatever you're doing to change the data in the blocks that are being queried, somehow you either have to commit very regularly (which is probably technically incorrect or illegal) or you have to separate the activity of the 16 processes so that they do their work in 16 separate parts of the database (for example, is it possible to use list partitioning to separate the activity of the sessions?).
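    As a purely hypothetical sketch of that separation (it assumes each process can be tagged with its own identifier, which may not be possible with the current data model):

```sql
-- Give each of the 16 processes its own list partition so their
-- inserts and queries touch disjoint sets of blocks.
CREATE TABLE yyyy_part (
    process_id NUMBER       NOT NULL,  -- hypothetical worker tag
    lptid      NUMBER       NOT NULL,
    ccc        VARCHAR2(30) NOT NULL,
    x1         NUMBER,
    x2         NUMBER
)
PARTITION BY LIST (process_id) (
    PARTITION p01 VALUES (1),
    PARTITION p02 VALUES (2),
    -- ... one partition per worker process ...
    PARTITION p16 VALUES (16)
);

-- A locally partitioned unique index keeps the index contention
-- separated in the same way.
CREATE UNIQUE INDEX ccc_idx_part
    ON yyyy_part (process_id, lptid, ccc) LOCAL;
```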

    One thought - you may find that if you run only 4 concurrent processes (and have each do four of the 16 batches one after the other) you complete the whole job faster, because at any one time you only have to unwind the changes of 3 other sessions.

    (Another odd thought - unlikely, but it's possible that you've found another ASSM bug: we'll postpone thinking about that for a while, though, until we have pursued the concurrency issue).


    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
  • 41. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hi Hemant,

    I ran the SQL queries mentioned. After flushing the database monitoring info I executed the first query; it responded with 'no rows'.

    For the second query, below are the results for today (I have ignored the previous day's results).

    {

    OBJECT_NAME TO_CHAR(T.END_I LOGICAL_READS_DELTA BUFFER_BUSY_WAITS_DELTA DB_BLOCK_CHANGES_DELTA ITL_WAITS_DELTA SPACE_USED_TOTAL SPACE_USED_DELTA SPACE_ALLOCATED_TOTAL SPACE_ALLOCATED_DELTA TABLE_SCANS_TOTAL TABLE_SCANS_DELTA
    YYYY 13-MAY 07:00:05 641504 0 321488 0 412961 0 4194304 0 0 0
    YYYY 13-MAY 08:00:17 897920 0 452752 0 417057 4096 4194304 0 0 0
    YYYY 13-MAY 09:00:31 262624 0 22084896 0 928896 928896 2097152 2097152 0 0
    YYYY 13-MAY 09:00:31 618256 184 65376320 0 934057 934057 2097152 2097152 0 0
    YYYY 13-MAY 09:00:31 1257904 1869 146579408 0 936105 936105 2097152 2097152 0 0
    YYYY 13-MAY 09:00:31 881488 2972 107073824 0 934057 934057 2097152 2097152 0 0
    YYYY 13-MAY 10:00:45 117776 0 30368 0 936105 936105 2097152 2097152 0 0
    YYYY 13-MAY 10:00:45 113296 0 29376 0 934057 934057 2097152 2097152 0 0
    YYYY 13-MAY 10:00:45 113808 0 29952 0 934057 934057 2097152 2097152 0 0
    YYYY 13-MAY 10:00:45 116384 0 30784 0 928896 928896 2097152 2097152 0 0
    YYYY 13-MAY 12:00:12 356464 0 70800 0 936105 0     2097152 0 0 0
    YYYY 13-MAY 12:00:12 634864 0 125920 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 12:00:12 507072 0 100112 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 12:00:12 476208 0 93648 0 928896 0     2097152 0 0 0
    YYYY 13-MAY 13:00:25 180768 0 35632 0 936105 0     2097152 0 0 0
    YYYY 13-MAY 13:00:25 141712 0 28688 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 13:00:25 120752 0 24640 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 13:00:25 148544 0 29168 0 928896 0     2097152 0 0 0
    YYYY 13-MAY 15:00:50 602544 28 297580624 0 936105 0     2097152 0 0 0
    YYYY 13-MAY 15:00:50 614192 18 94822816 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 15:00:50 638608 12 332626960 0 928896 0     2097152 0 0 0
    YYYY 13-MAY 15:00:50 593568 0 81318384 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 16:00:02 44880 13 108230288 0 934057 0     2097152 0 0 0
    YYYY 13-MAY 16:00:02 49264 12 346351392 0 928896 0     2097152 0 0 0
    YYYY 13-MAY 16:00:02 18208 0 134412432 0 936105 0 2097152 0 0 0
    YYYY 14-MAY 06:00:58 210640 257 12066688 0 398461 398461 524288 524288 0 0
    YYYY 14-MAY 09:47:32 322624 1383 1726057840 0 403622 5161 524288 0 0 0
    YYYY 14-MAY 11:01:00 202064 1035 3160500240 0 403622 0 524288 0 0 0

    95 rows selected.
    }

    Regards
    Satish
  • 42. Re: Oracle 10g performance degrades while concurrent inserts into a table
    700680 Newbie
    Hi Jonathan,

    Thanks for the reply. You have rightly pointed out that all 16 processes are trying to update/insert into a small piece of the database, i.e. YYYY (a small table). I cannot go with the option of committing at intermediate points. If I run the task with 4 threads I don't see any issue; I am facing this issue only when increasing the number of threads.


    Please find the output of the SQL query asked for:

    select sn.name, ss.value
    from   v$sesstat  ss,
           v$statname sn
    where  ss.sid = 12
    and    ss.value != 0
    and    sn.statistic# = ss.statistic#;
    
    
    NAME                                              VALUE
    ---------------------------------------- --------------
    logons cumulative                                     1
    logons current                                        1
    opened cursors cumulative                         9,493
    opened cursors current                               18
    user commits                                          2
    user rollbacks                                        5
    user calls                                       40,550
    recursive calls                                 189,070
    recursive cpu usage                             702,802
    session logical reads                       458,008,242
    CPU used when call started                      710,040
    CPU used by this session                        710,112
    DB time                                         851,867
    concurrency wait time                             1,463
    user I/O wait time                              172,041
    session connect time                      1,242,306,303
    process last non-idle time                1,242,306,303
    session uga memory                            1,008,872
    session uga memory max                        1,764,552
    messages sent                                     6,677
    session pga memory                            3,427,496
    session pga memory max                        4,279,464
    enqueue waits                                        84
    enqueue requests                                  1,493
    enqueue releases                                  1,478
    physical read total IO requests                 287,861
    physical read total bytes                 2,358,157,312
    db block gets                                   478,176
    db block gets from cache                        478,176
    consistent gets                             457,531,180
    consistent gets from cache                  457,531,208
    consistent gets - examination               457,364,188
    physical reads                                  287,861
    physical reads cache                            287,861
    physical read IO requests                       287,861
    physical read bytes                       2,358,157,312
    db block changes                                514,647
    consistent changes                          455,795,774
    change write time                                   848
    redo synch writes                                 9,293
    redo synch time                                   2,936
    exchange deadlocks                                   22
    free buffer requested                           407,356
    dirty buffers inspected                          26,400
    pinned buffers inspected                          2,340
    hot buffers moved to head of LRU              3,473,975
    free buffer inspected                         1,357,619
    commit cleanout failures: cannot pin                  3
    commit cleanouts                                  2,533
    commit cleanouts successfully completed           2,530
    CR blocks created                               111,419
    switch current to new buffer                         40
    write clones created in foreground                  252
    shared hash latch upgrades - no wait          1,035,447
    shared hash latch upgrades - wait                29,039
    calls to kcmgcs                                   1,278
    calls to kcmgas                                 395,720
    calls to get snapshot scn: kcmgss               170,114
    redo entries                                    312,734
    redo size                                    85,852,616
    redo buffer allocation retries                       11
    redo log space requests                              19
    redo log space wait time                             57
    redo ordering marks                               3,450
    redo subscn max counts                           10,226
    undo change vector size                      26,189,488
    data blocks consistent reads - undo records applied   455,794,398
    no work - consistent read gets                   35,164
    cleanouts only - consistent read gets                 7
    rollbacks only - consistent read gets             3,715
    cleanouts and rollbacks - consistent read gets      107,703
    rollback changes - undo records applied              18
    transaction rollbacks                                 9
    immediate (CURRENT) block cleanout applications       2,553
    immediate (CR) block cleanout applications      107,710
    deferred (CURRENT) block cleanout applications           74
    commit txn count during cleanout                    792
    active txn count during cleanout              1,461,051
    cleanout - number of ktugct calls               113,633
    commit batch/immediate requested                      9
    commit immediate requested                            9
    commit batch/immediate performed                      9
    commit immediate performed                            9
    table scans (short tables)                           94
    table scan rows gotten                          131,207
    table scan blocks gotten                          2,225
    table fetch by rowid                             18,782
    cluster key scans                                 3,359
    cluster key scan block gets                      13,258
    rows fetched via callback                        18,617
    index crx upgrade (positioned)                       19
    leaf node splits                                    823
    branch node splits                                    4
    index fetch by key                              111,627
    index scans kdiixs1                              19,485
    heap block compress                              35,481
    session cursor cache hits                        28,651
    session cursor cache count                           20
    cursor authentications                                1
    buffer is pinned count                           34,671
    buffer is not pinned count                       47,709
    workarea executions - optimal                        72
    parse time cpu                                      169
    parse time elapsed                                  132
    parse count (total)                              28,702
    parse count (hard)                                    1
    execute count                                   168,338
    bytes sent via SQL*Net to client              7,510,373
    bytes received via SQL*Net from client       82,959,043
    SQL*Net roundtrips to/from client                40,542
    sorts (memory)                                       79
    sorts (rows)                                      3,256
    
    112 rows selected.
    Regards
    Satish
  • 43. Re: Oracle 10g performance degrades while concurrent inserts into a table
    26741 Oracle ACE
    You have very high DB_BLOCK_CHANGES but not very high LOGICAL_READS.

    Thus, the same blocks are being updated very frequently. The Session Statistics on consistent gets are very high not on the table's blocks but on reads from Undo for read consistency. The concurrent sessions are hitting the same blocks and repeatedly re-generating a read consistent image from Undo.
  • 44. Re: Oracle 10g performance degrades while concurrent inserts into a table
    Jonathan Lewis Oracle ACE Director
    The most relevant stats are these:

    >
    CR blocks created                               111,419
    data blocks consistent reads - undo records applied   455,794,398
    You've used 455 million undo records to create 111 thousand block images - an average of 4,000 undo records per block. As time passes, of course, some of those undo blocks might end up coming off disc.

    There are other (much less significant) signs of the same thing happening on other statements (which probably bring the average down from 50,000 per block to 4,000 per block), but the simple select seems to be the real killer. Somehow you need to spread the workload of those 16 processes across more blocks so that each is subject to less undo.

    What is the significance of LPTID and CCC ? Is there some way of associating specific values with specific processes ?

    You mentioned that you have tried hash partitioning the table - on which column, how many partitions, and did you make the unique index locally partitioned. (At the moment we don't know whether the problem is in the table, the index, or both).

    ** comment about freelists deleted - user is on ASSM ** If the table always contains some data, then rebuilding the index with a very large value for pctfree (say 99) might spread the index data thinly enough to reduce the problem. The problem is, until we know what the significance of the data is, we don't really know why the pattern of activity is causing so much read-consistency work, so we can't begin to suggest an appropriate solution.
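    As a sketch, taking the extreme value mentioned (CCC_IDX is the index name from the posted plan):

```sql
-- Leave each index leaf block 99% empty at rebuild time so that
-- concurrently inserted keys land in different blocks and sessions
-- collide on far fewer index blocks.
ALTER INDEX ccc_idx REBUILD PCTFREE 99;
```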

    One thing that puzzles me: in the trace file extract you showed 6,000 queries against the table, but only 6 inserts. This doesn't seem like a fast enough rate of change to cause a problem. Why would this happen - is it because you are selecting to check for the existence of a row before inserting it, and most of the time (after the first few minutes perhaps) the row exists ?


    UPDATE: Setting the PCTFREE on the table to a very high value could also reduce the problem on the table - again going to the extreme of 99 might not be a bad idea, but you might want to start with a slightly less extreme value (but since your rows are very small I'd be thinking about values like 90 and over). How many rows do you insert in total ? Is this a constantly growing table, or does it get emptied before you start - if so how ? And what's your blocksize by the way ?
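    A sketch of that suggestion (table name as posted; note that PCTFREE only affects future inserts, so a MOVE would be needed to redistribute rows already in the table):

```sql
-- Keep each table block mostly empty so the 16 processes' rows are
-- spread across many more blocks.
ALTER TABLE yyyy PCTFREE 90;

-- To redistribute existing rows under the new setting:
ALTER TABLE yyyy MOVE PCTFREE 90;
-- A MOVE leaves the table's indexes unusable, so rebuild afterwards:
ALTER INDEX ccc_idx REBUILD;
```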


    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk


    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

    Edited by: Jonathan Lewis on May 14, 2009 5:24 PM
    Removed comments about Freelists - different problem.
