This discussion is archived
30 Replies. Latest reply: May 30, 2010 12:38 AM by mbobak
  • 15. Re: What is fastest way to insert millions rows of data to another table
    Hemant K Chitale Oracle ACE
    we encountered "ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDO1'"
    Did you monitor the size of the transaction (e.g. in V$TRANSACTION) ?
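    A monitoring query along these lines (a sketch using the standard V$ views; the bind for the loader's SID is illustrative) would have shown how much undo the load was consuming:

    ```sql
    -- Watch undo consumed by the running load from a second session.
    -- USED_UBLK = undo blocks held, USED_UREC = undo records held.
    SELECT s.sid, s.serial#, t.used_ublk, t.used_urec
    FROM   v$transaction t
           JOIN v$session s ON s.taddr = t.addr
    WHERE  s.sid = :loader_sid;  -- SID of the loading session
    ```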

    INSERT APPEND does not generate undo for the table, but undo would still be generated for the indexes on the target table. Did you have indexes present and enabled?


    You shouldn't be using the PL/SQL loop. A straightforward INSERT in one pass (as already recommended by Mark Bobak and "SomeoneElse") is what you should be doing -- as you were doing earlier.


    Hemant K Chitale
    http://hemantoracledba.blogspot.com
  • 16. Re: What is fastest way to insert millions rows of data to another table
    Randolf Geist Oracle ACE Director
    user4184769 wrote:
    Initially, we loaded data directly from my_staging_table without looping over the ids, and here was the code:

    INSERT /*+ append */ INTO my_target_table
    SELECT /*+ parallel */ * FROM my_staging_table;

    however, we encountered "ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDO1'", and
    we broke the statement down by looping over the ids, and that seemed to resolve the problem.
    As already pointed out by Hemant and others:

    If you were doing what you've outlined above, in particular setting the indexes to unusable, then the direct-path insert is not supposed to generate any undo.

    So either there were indexes still enabled that required index maintenance, or you've hit one of the limitations that prevent direct-path inserts, for example enabled foreign keys, deferrable constraints or enabled triggers on the target table.
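    As a hedged sketch, queries like the following (standard USER_* views; the table name is the OP's) can surface such blockers:

    ```sql
    -- Enabled triggers on the target table silently disable direct-path.
    SELECT trigger_name, status
    FROM   user_triggers
    WHERE  table_name = 'MY_TARGET_TABLE'
    AND    status = 'ENABLED';

    -- Enabled foreign keys and deferrable constraints do the same.
    SELECT constraint_name, constraint_type, deferrable, status
    FROM   user_constraints
    WHERE  table_name = 'MY_TARGET_TABLE'
    AND    (constraint_type = 'R' OR deferrable = 'DEFERRABLE');
    ```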

    That might also explain why your load takes longer than expected.

    See here: http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_9014.htm#i2163698 and here: http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#CACEJACE

    for a list of restrictions on direct-path inserts and parallel DML.

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    Co-author of the "OakTable Expert Oracle Practices" book:
    http://www.apress.com/book/view/1430226684
    http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
  • 17. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Hi Hemant,

    If we have the indexes enabled and we use a direct load, wouldn't that be faster than disabling and then rebuilding them?

    For the latter we have to disable the indexes + load the data + rebuild the indexes.
  • 18. Re: What is fastest way to insert millions rows of data to another table
    mbobak Oracle ACE
    Hi Taral,

    I know you asked Hemant, but I'll go ahead and answer.
    Taral wrote:
    Hi Hemant,

    If we have the indexes enabled and we use a direct load, wouldn't that be faster than disabling and then rebuilding them?
    No, absolutely not. It's much faster to invalidate the indexes, load the data, and rebuild the indexes.

    The main reason is that index maintenance must be logged: it's not possible to do a direct load into indexes. So, even if you do a direct-path load on the table, the index maintenance will generate both undo and redo.
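    A minimal sketch of that invalidate / load / rebuild pattern (the index and table names here are illustrative, not from the OP's schema):

    ```sql
    ALTER INDEX my_target_idx UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;  -- let DML ignore the unusable index

    -- Direct-path load with no index maintenance to log.
    INSERT /*+ append */ INTO my_target_table
    SELECT * FROM my_staging_table;
    COMMIT;

    ALTER INDEX my_target_idx REBUILD NOLOGGING;
    ```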

    Hope that helps,

    -Mark
  • 19. Re: What is fastest way to insert millions rows of data to another table
    Hemant K Chitale Oracle ACE
    Let me put it this way.

    Say I have a table that has 10 million rows and I am loading (direct path, please), say, 500,000 rows. I might choose not to disable the index and let the direct-path operation merge in the new index values. Then again, that table might actually be partitioned (say, into 10 partitions) with a LOCAL index. In that case, I would set only the specific index partition unusable, load (direct path) the data into the target table partition, and then rebuild that specific index partition.
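    A sketch of that partition-level variant (the partition and index names are made up for illustration):

    ```sql
    -- Make only the affected local index partition unusable.
    ALTER INDEX my_local_idx MODIFY PARTITION p_201005 UNUSABLE;

    -- Direct-path load into just that table partition.
    INSERT /*+ append */ INTO my_table PARTITION (p_201005)
    SELECT * FROM my_staging_table;
    COMMIT;

    -- Rebuild only that index partition.
    ALTER INDEX my_local_idx REBUILD PARTITION p_201005 NOLOGGING;
    ```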

    But if I were loading, say, 3 million rows, I would certainly make the index unusable and rebuild it.


    So, marking the indexes UNUSABLE and REBUILDing them after the load is the correct thing to do.

    See http://hemantoracledba.blogspot.com/2008/05/append-nologging-and-indexes.html

    Going back to the OP's previous post, he does indicate that the indexes are made unusable:
    alter session enable parallel dml;
    
    truncate table my_staging_table;
    
    mark all indexes in my_staging_table to be unusable;
    
    insert /*+ append */ into my_staging_table
    select * from my_target_table
    where status = 'D';
    
    commit;
    
    rebuild indexes on my_staging_table;
    
    truncate table my_target_table;
    
    mark all indexes in my_target_table to be unusable;
    
    for i in (select distinct id from id_source)
    loop
    
    execute immediate 'insert /*+ append */ into my_target_table
    select * from my_staging_table
    where id = :id' using i.id;
    
    commit;
    end loop;
    
    rebuild all indexes in my_target_table;
    
    truncate table my_staging_table;
    So I guess that your first method (where you encountered Undo space issues) was without making the indexes unusable.

    What you need to do is:
    1. Stop using the PL/SQL loop
    2. Mark the indexes unusable
    3. Load the data as a single INSERT .. APPEND .. PARALLEL .. SELECT .. PARALLEL operation
    4. Rebuild the indexes with PARALLEL and NOLOGGING
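    Put together, those steps might look like this (table names from the OP's script; the degree of parallelism 4 is an arbitrary example):

    ```sql
    ALTER SESSION ENABLE PARALLEL DML;

    -- 2. Mark the indexes unusable (repeat per index).
    ALTER INDEX my_target_idx UNUSABLE;

    -- 3. One parallel direct-path pass, no PL/SQL loop.
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO my_target_table t
    SELECT /*+ PARALLEL(s, 4) */ * FROM my_staging_table s;
    COMMIT;

    -- 4. Rebuild with PARALLEL and NOLOGGING.
    ALTER INDEX my_target_idx REBUILD PARALLEL 4 NOLOGGING;
    ```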

    What you also need to do is to identify how you can partition the table.


    Hemant K Chitale
    http://hemantoracledba.blogspot.com

    Edited by: Hemant K Chitale on May 26, 2010 10:06 AM
  • 20. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Thanks Hemant for this.

    Yes, for partitioned tables I agree, but for heap tables I don't know; maybe I am wrong. (Tested on 10.2.0.3.)
    create table load_data nologging as select * from dba_objects;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    drop table big_ind_test purge;
    create table big_ind_test nologging as select rownum p_id , x.* from load_data x;
    exec dbms_stats.gather_table_stats(user,'BIG_IND_TEST');
    create bitmap index id_1_bit on big_ind_test(object_id);
    create bitmap index id_2_bit on big_ind_test(object_type);
    create bitmap index id_3_bit on big_ind_test(p_id);
    
    
    select count(*) from load_data ; -- 8290848
    
    alter system flush buffer_cache;
    
    alter system flush buffer_cache;
    
    exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    commit;
    
    exec dbms_monitor.session_trace_disable;
    -----
    alter system flush buffer_cache;
    
    alter system flush buffer_cache;
    
    exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    alter index id_1_bit unusable;
    alter index id_2_bit unusable;
    alter index id_3_bit unusable;
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    commit;
    
    alter index id_1_bit rebuild;
    alter index id_2_bit rebuild;
    alter index id_3_bit rebuild;
    
    exec dbms_monitor.session_trace_disable;
    
    Index enable and inserting data
    =================================
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        3      0.00       0.00          0          0          0           0
    Execute      4     84.08     127.48      13395    7585049    9501439     1000001
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        7     84.08     127.49      13395    7585049    9501439     1000001
    
    
    Load with disabled index
    ==========================
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse       11      0.04       0.13          3         50          1           0
    Execute     12    275.31     476.80     199302    4895692      35130     1000001
    Fetch        4      0.02       0.50         63        216          0           6
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       27    275.37     477.44     199368    4895958      35131     1000007
  • 21. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Thanks Mark for the explanation.

    Yes, I thought it would be faster, but then I read this, tried it, and the load with the indexes enabled was faster.

    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6601312252730#2323213100346095738

    Followup   January 11, 2010 - 8pm Central time zone:

    you know, it all depends.

    it depends on what you are trying to speed up.


    if you direct path load, it would probably be dandy to keep the indexes in place (best to keep in place) and load. We'll build the indexes in bulk as we load (avoiding the scanning and re-scanning of the partition for each index) and no one will hit undo read consistency issues because you load above the high water mark (they just don't see that data)

    if you do this right into the table and don't maintain the indexes - prior to 11gR2, it could cause failures in queries after you load, but before you rebuild the local index partition.


    so, are you direct pathing or not? describe your load.
  • 22. Re: What is fastest way to insert millions rows of data to another table
    Randolf Geist Oracle ACE Director
    Taral wrote:
    Index enable and inserting data
    =================================
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        3      0.00       0.00          0          0          0           0
    Execute      4     84.08     127.48      13395    7585049    9501439     1000001
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        7     84.08     127.49      13395    7585049    9501439     1000001
    
    
    Load with disabled index
    ==========================
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse       11      0.04       0.13          3         50          1           0
    Execute     12    275.31     476.80     199302    4895692      35130     1000001
    Fetch        4      0.02       0.50         63        216          0           6
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       27    275.37     477.44     199368    4895958      35131     1000007
    Interesting results - but did you notice that the "Load with disabled index" actually had to perform less work than the "Index enable and inserting data"? If you check the total consistent and current gets, there is a significant overhead with the first one.

    Very likely the difference in runtime is caused by the 15 times higher number of physical I/Os (13395 vs. 199302) - therefore the question would be, what exactly caused this excess physical I/O?

    Can you easily identify which of the steps mainly contributed to that total "disk" activity? It could be the index rebuilds, and it might be caused by writes / reads to the TEMP tablespace, which show up as "direct path read / write temp" wait events. If that is the case, increasing your PGA aggregate target or switching to manual workarea sizing might reduce the TEMP activity significantly and therefore could reduce the overall runtime / workload of this scenario.
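    For reference, the two knobs mentioned can be adjusted roughly like this (the values shown are examples only, not recommendations):

    ```sql
    -- Inspect the current settings first.
    SHOW PARAMETER pga_aggregate_target
    SHOW PARAMETER workarea_size_policy

    -- Option 1: raise the instance-wide PGA target (example value).
    ALTER SYSTEM SET pga_aggregate_target = 2G;

    -- Option 2: manual workarea sizing for this session only.
    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 268435456;  -- 256 MB, example value
    ```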

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    Co-author of the "OakTable Expert Oracle Practices" book:
    http://www.apress.com/book/view/1430226684
    http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
  • 23. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Thanks Randolf for the information. But I see the major contribution coming from re-reading the table to fetch the data for the indexes.
    Index Enable
    ============
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from  load_data x where rownum < 1000000
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1     84.07     127.45      13394    7585048    9501394      999999
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2     84.07     127.46      13394    7585048    9501394      999999
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          1  LOAD AS SELECT  (cr=7585048 pr=13394 pw=9066 time=127457133 us)
     999999   COUNT STOPKEY (cr=8721 pr=8746 pw=0 time=13028910 us)
     999999    TABLE ACCESS FULL LOAD_DATA (cr=8721 pr=8746 pw=0 time=13028890 us)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                      4641        0.18          2.94
      control file sequential read                   10        0.00          0.00
      db file scattered read                        154        0.32         12.31
      direct path write                               2        0.00          0.00
      undo segment extension                        303        0.00          0.03
      buffer busy waits                               1        0.00          0.00
      latch: cache buffers chains                     5        0.01          0.02
      log file switch completion                      6        0.08          0.24
      log file sync                                   1        0.04          0.04
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
      
    Index Disable
    =============
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      5.88      18.48       8761       8764       9145      999999
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2      5.88      18.49       8761       8764       9145      999999
    
    Rows     Row Source Operation
    -------  ---------------------------------------------------
          1  LOAD AS SELECT  (cr=8764 pr=8761 pw=9066 time=18485256 us)
     999999   COUNT STOPKEY (cr=8721 pr=8746 pw=0 time=12029023 us)
     999999    TABLE ACCESS FULL LOAD_DATA (cr=8721 pr=8746 pw=0 time=12029006 us)
    
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      control file sequential read                   10        0.00          0.00
      db file sequential read                        13        0.01          0.02
      db file scattered read                        152        0.69         11.64
      direct path write                               2        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    
    First
    ------------
    alter index id_1_bit rebuild
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.01          3         16          1           0
    Execute      1     23.96     108.71      84945      85404       5769           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2     23.96     108.72      84948      85420       5770           0
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1353        0.90         84.15
      latch: cache buffers chains                     1        0.00          0.00
      db file sequential read                        23        0.02          0.06
      direct path write                               2        0.01          0.01
      reliable message                                1        0.00          0.00
      rdbms ipc reply                                 1        0.06          0.06
      log file sync                                   1        0.05          0.05
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.07          0.07  
    
    Second
    ------------
    alter index id_2_bit rebuild
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.01       0.06          0         17          0           0
    Execute      1     10.05      53.95      51940      85423        750           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2     10.06      54.01      51940      85440        750           0
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        40        0.00          0.00
      db file scattered read                        826        0.33         44.59
      direct path write                               8        0.01          0.01
      enq: RO - fast object reuse                     2        0.00          0.00
      rdbms ipc reply                                 1        0.04          0.04
      log file sync                                   1        0.06          0.06
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.07          0.07  
    
    Third
    ------------
    alter index id_3_bit rebuild
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.02          0         17          0           0
    Execute      1    235.41     295.65      53651    4716100      19415           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2    235.41     295.67      53651    4716117      19415           0
    
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        65        0.02          0.05
      db file scattered read                        858        1.05         30.32
      latch: cache buffers chains                     1        0.03          0.03
      log file switch completion                      4        0.13          0.24
      direct path write                               2        0.01          0.01
      reliable message                                1        0.00          0.00
      rdbms ipc reply                                 1        0.02          0.02
      log file sync                                   1        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.04          0.04  
  • 24. Re: What is fastest way to insert millions rows of data to another table
    mbobak Oracle ACE
    Taral,

    I see a problem with your test. Please make your indexes NOLOGGING at create time and rebuild time, and see if that helps.

    I think you'll find it makes a big difference. Here's a slightly modified version of your script that I ran:
    SQL> @my_test
    SQL> drop table load_data;
    
    Table dropped.
    
    Elapsed: 00:00:00.06
    SQL> create table load_data nologging as select * from dba_objects;
    
    Table created.
    
    Elapsed: 00:00:00.72
    SQL> insert /*+ append */ into load_data select * from load_data;
    
    80821 rows created.
    
    Elapsed: 00:00:00.21
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.00
    SQL> insert /*+ append */ into load_data select * from load_data;
    
    161642 rows created.
    
    Elapsed: 00:00:00.27
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.00
    SQL> insert /*+ append */ into load_data select * from load_data;
    
    323284 rows created.
    
    Elapsed: 00:00:00.52
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.00
    SQL> insert /*+ append */ into load_data select * from load_data;
    
    646568 rows created.
    
    Elapsed: 00:00:00.92
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.01
    SQL> drop table big_ind_test purge;
    
    Table dropped.
    
    Elapsed: 00:00:00.09
    SQL> create table big_ind_test nologging as select rownum p_id , x.* from load_data x;
    
    Table created.
    
    Elapsed: 00:00:02.44
    SQL> exec dbms_stats.gather_table_stats(user,'BIG_IND_TEST');
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:04.35
    SQL> create bitmap index id_1_bit on big_ind_test(object_id) nologging;
    
    Index created.
    
    Elapsed: 00:00:01.16
    SQL> create bitmap index id_2_bit on big_ind_test(object_type) nologging;
    
    Index created.
    
    Elapsed: 00:00:00.50
    SQL> create bitmap index id_3_bit on big_ind_test(p_id) nologging;
    
    Index created.
    
    Elapsed: 00:00:05.01
    SQL>
    SQL>
    SQL> select count(*) from load_data;
    
      COUNT(*)
    ----------
       1293136
    
    Elapsed: 00:00:01.20
    SQL>
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    Elapsed: 00:00:00.44
    SQL>
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    Elapsed: 00:00:00.07
    SQL>
    SQL> --exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    SQL> set timing on
    SQL> set autot traceonly statistics
    SQL>
    SQL> insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    999999 rows created.
    
    Elapsed: 00:01:01.68
    
    Statistics
    ----------------------------------------------------------
           2535  recursive calls
        9890436  db block gets
        6524474  consistent gets
          25097  physical reads
      663509744  redo size
            832  bytes sent via SQL*Net to client
            863  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              4  sorts (memory)
              1  sorts (disk)
         999999  rows processed
    
    SQL>
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.01
    SQL>
    SQL> --exec dbms_monitor.session_trace_disable;
    SQL> set autot off
    SQL> -----
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    Elapsed: 00:00:00.77
    SQL>
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    Elapsed: 00:00:00.03
    SQL>
    SQL> --exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    SQL> set autot traceonly statistics
    SQL> alter index id_1_bit unusable;
    
    Index altered.
    
    Elapsed: 00:00:00.04
    SQL> alter index id_2_bit unusable;
    
    Index altered.
    
    Elapsed: 00:00:00.03
    SQL> alter index id_3_bit unusable;
    
    Index altered.
    
    Elapsed: 00:00:00.02
    SQL> insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    999999 rows created.
    
    Elapsed: 00:00:03.18
    
    Statistics
    ----------------------------------------------------------
            724  recursive calls
          15443  db block gets
          14531  consistent gets
          14468  physical reads
          70744  redo size
            832  bytes sent via SQL*Net to client
            863  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
             11  sorts (memory)
              0  sorts (disk)
         999999  rows processed
    
    SQL>
    SQL> commit;
    
    Commit complete.
    
    Elapsed: 00:00:00.00
    SQL>
    SQL> alter index id_1_bit rebuild nologging;
    
    Index altered.
    
    Elapsed: 00:00:06.91
    SQL> alter index id_2_bit rebuild nologging;
    
    Index altered.
    
    Elapsed: 00:00:01.45
    SQL> alter index id_3_bit rebuild nologging;
    
    Index altered.
    
    Elapsed: 00:00:11.60
    SQL>
    SQL> --exec dbms_monitor.session_trace_disable;
    SQL> set autot off
    No, that's not a mistake.

    With indexes enabled, it took just over a minute to load the table, and no index maintenance following the load.
    With indexes disabled, it took about 3 seconds to load the table, and the following index maintenance took a sum total of around 20 seconds.

    I think the evidence is convincing.

    Just to make it easier for you to run, without having to clean up the above, here's my script that you can just cut-n-paste, no cleanup required:
    drop table load_data;
    create table load_data nologging as select * from dba_objects;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    insert /*+ append */ into load_data select * from load_data;
    commit;
    drop table big_ind_test purge;
    create table big_ind_test nologging as select rownum p_id , x.* from load_data x;
    exec dbms_stats.gather_table_stats(user,'BIG_IND_TEST');
    create bitmap index id_1_bit on big_ind_test(object_id) nologging;
    create bitmap index id_2_bit on big_ind_test(object_type) nologging;
    create bitmap index id_3_bit on big_ind_test(p_id) nologging;
    
    
    select count(*) from load_data;
    
    alter system flush buffer_cache;
    
    alter system flush buffer_cache;
    
    --exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    set timing on
    set autot traceonly statistics
    
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    commit;
    
    --exec dbms_monitor.session_trace_disable;
    set autot off
    -----
    alter system flush buffer_cache;
    
    alter system flush buffer_cache;
    
    --exec dbms_monitor.session_trace_enable(waits=>true, binds=>true);
    set autot traceonly statistics
    alter index id_1_bit unusable;
    alter index id_2_bit unusable;
    alter index id_3_bit unusable;
    insert /*+ append */ into big_ind_test select rownum p_id , x.* from load_data x where rownum < 1000000;
    
    commit;
    
    alter index id_1_bit rebuild nologging;
    alter index id_2_bit rebuild nologging;
    alter index id_3_bit rebuild nologging;
    
    --exec dbms_monitor.session_trace_disable;
    set autot off
    Try the above, let me know if results differ drastically from my results.

    -Mark

    PS: You chose to use bitmap indexes, which is fine (this type of data load is more likely in a data warehouse environment anyhow), but it really doesn't make much sense to have a bitmap index on that last column, p_id, since it's unique (driven from rownum, so guaranteed not to have duplicate values). Again, probably not relevant to the test results, but it struck me as odd, so I mentioned it.
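    Following that PS, a plain B-tree index would be the more natural choice for p_id; a hedged variant of the test setup (the new index name is invented):

    ```sql
    -- Replace the bitmap index on the near-unique p_id with a B-tree index.
    DROP INDEX id_3_bit;
    CREATE INDEX id_3_btree ON big_ind_test(p_id) NOLOGGING;
    ```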
  • 25. Re: What is fastest way to insert millions rows of data to another table
    Randolf Geist Oracle ACE Director
    Taral wrote:
    Thanks Randolf for the information. But I see the major contribution coming from re-reading the table to fetch the data for the indexes.
    Third
    ------------
    alter index id_3_bit rebuild
    
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.02          0         17          0           0
    Execute      1    235.41     295.65      53651    4716100      19415           0
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        2    235.41     295.67      53651    4716117      19415           0
    
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    db file sequential read                        65        0.02          0.05
    db file scattered read                        858        1.05         30.32
    latch: cache buffers chains                     1        0.03          0.03
    log file switch completion                      4        0.13          0.24
    direct path write                               2        0.01          0.01
    reliable message                                1        0.00          0.00
    rdbms ipc reply                                 1        0.02          0.02
    log file sync                                   1        0.00          0.00
    SQL*Net message to client                       1        0.00          0.00
    SQL*Net message from client                     1        0.04          0.04  
    Thanks for posting the details.

    I hadn't realized that you insert far fewer rows into the table than are already there - this does not correspond to the scenario the OP has outlined.

    In your case, where the insert only adds a fraction of the rows, the bulk index merge operation works quite well. Notice, however, that the excess runtime of the third index above is not explained by the waits for re-reading the table. Had the time to build the third index (which is not really a good candidate for a bitmap index, as pointed out by Mark) been comparable to the previous ones, the overall time would be much closer to the insert with indexes enabled.

    Furthermore, some of your multi-block I/O wait times are quite high - a single "db file scattered read" wait event taking more than a second is rather unusual - which might be another reason why re-reading the table takes so long in your particular case.

    In the case of the OP, with an empty target table, disabling the indexes would probably be the quickest way to load the data.

    Regards,
    Randolf

    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/

    Co-author of the "OakTable Expert Oracle Practices" book:
    http://www.apress.com/book/view/1430226684
    http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
  • 26. Re: What is fastest way to insert millions rows of data to another table
    Hemant K Chitale Oracle ACE
    Taral ,
    alter index id_1_bit rebuild;
    alter index id_2_bit rebuild;
    alter index id_3_bit rebuild;
    Two comments (Randolf and Mark have already pointed these out, but I am only responding now, in my timezone):

    a. You haven't used NOLOGGING for the rebuild, nor PARALLEL for the rebuild (or for the INSERT..SELECT either).
    b. A BitMap index on p_id may well be counter-productive. Your statistics also show that the rebuild of this index takes the most effort.


    Hemant K Chitale
    http://hemantoracledba.blogspot.com
  • 27. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Thanks Mark, Hemant and Randolf for all the information and for helping me learn new things.
  • 28. Re: What is fastest way to insert millions rows of data to another table
    mbobak Oracle ACE
    You've asked 26 questions, and failed to close out 17 of them. Don't forget to mark this question as answered, and appropriately mark replies as correct and/or helpful.

    -Mark
  • 29. Re: What is fastest way to insert millions rows of data to another table
    Taral Journeyer
    Sure, I will take care of this next time. But this question was not asked by me, so I can't do anything about it.
