This discussion is archived
16 Replies · Latest reply: Dec 6, 2012 8:26 AM by rp0428

Purging a table with 98M rows

748829 Newbie

Hi,

I want to purge a table that has more than 98M rows. Here are the details.

Purge Process I followed
---------------------------------------------
Step 1. Created a backup table from the main table with only the required data
create table abc_98M_purge as
select * from abc_98M where trunc(tran_date)>='01-jul-2011'
-> table created with 5325411 rows

Step 2. truncate table abc_98M

Step 3. Inserted all 5325411 rows back into abc_98M from abc_98M_purge using the procedure below

DECLARE
  TYPE arrrowid IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
  tbrows arrrowid;
  CURSOR cur_insert IS SELECT rowid FROM abc_98M_purge ORDER BY rowid;
BEGIN
  OPEN cur_insert;
  LOOP
    FETCH cur_insert BULK COLLECT INTO tbrows LIMIT 50000;
    FORALL i IN 1 .. tbrows.COUNT
      INSERT INTO abc_98M
        SELECT * FROM abc_98M_purge WHERE rowid = tbrows(i);
    COMMIT;
    EXIT WHEN cur_insert%NOTFOUND;
  END LOOP;
  CLOSE cur_insert;
END;

Problem
-----------------------------
Step 3 took 4 hours to insert the 5325411 rows.

Please suggest a better approach so that my downtime is reduced.
  • 1. Re: Purging a table with 98M rows
    JohnWatson Guru
    Hi - am I missing something? The obvious answer is that step 3 should be
    insert into abc_98M select * from abc_98M_purge;
    By the way, you have a bug waiting to happen in your method, because you are relying on implicit type conversion here:
    where trunc(tran_date)>='01-jul-2011'
    The string literal depends on the session's NLS_DATE_FORMAT, and wrapping the column in TRUNC() may also inhibit the use of an index on tran_date.
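John's point can be illustrated as follows; this is a sketch of the same CTAS predicate with an explicit date literal and without the TRUNC() wrapper (assuming tran_date is a DATE column, so the rewritten range predicate is equivalent and index-friendly):

```sql
-- An explicit TO_DATE removes the dependence on the session's NLS_DATE_FORMAT,
-- and comparing the raw column (no TRUNC) lets an index on tran_date be used.
create table abc_98M_purge as
select *
  from abc_98M
 where tran_date >= to_date('01-JUL-2011', 'DD-MON-YYYY');
```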
  • 2. Re: Purging a table with 98M rows
    Osama_Mustafa Oracle ACE
    The best way is to use CTAS (CREATE TABLE AS SELECT).

    Read about it.
  • 3. Re: Purging a table with 98M rows
    Nikolay Savvinov Guru
    Hi,

    1) as already suggested here, don't use a PL/SQL loop where a single INSERT statement could work
    2) use "append" (direct path inserts) hint
    3) parallelize the workload if you have enough free CPUs available (parallel hint)
    4) make sure all triggers on the table are disabled
    5) same about constraints
    6) indexes slow down DML operations considerably -- so if measures listed above fail, you may have to drop
    and re-create the indexes after the insert

    Best regards,
    Nikolay

    Edited by: Nikolay Savvinov on Dec 2, 2012 2:51 PM
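    Points 1-3 above can be sketched as a single direct-path, parallel statement (the degree of parallelism here is illustrative, and parallel DML must be enabled at the session level before a parallel INSERT takes effect):

```sql
-- Parallel DML is disabled by default and must be enabled per session.
alter session enable parallel dml;

-- One direct-path (APPEND), parallel insert -- no PL/SQL loop.
insert /*+ append parallel(t, 4) */ into abc_98M t
select /*+ parallel(s, 4) */ * from abc_98M_purge s;

-- A direct-path insert must be committed before the table can be queried again.
commit;
```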
  • 4. Re: Purging a table with 98M rows
    748829 Newbie
    Hi all, thanks for your posts.

    @JohnWatson,

    1. This gives me an error related to the temp tablespace, and I cannot increase it as I don't have any physical space left.

    2. That part is not the problem, as I am able to create abc_98M_purge from abc_98M within 2 minutes. The only problem is inserting the data back.


    @Osama_mustafa
    Will you please post a link I can refer to? That would be very helpful.


    @Nikolay Savvinov
    1) as already suggested here, don't use a PL/SQL loop where a single INSERT statement could work
    -> As I explained above, it did not work

    2) use "append" (direct path inserts) hint
    -> Please elaborate if possible, as I am new to all this

    3) parallelize the workload if you have enough free CPUs available (parallel hint)
    -> Please elaborate if possible, as I am new to all this

    4) make sure all triggers on the table are disabled
    -> They are disabled

    5) same about constraints
    -> They are disabled

    6) indexes slow down DML operations considerably -- so if measures listed above fail, you may have to drop
    and re-create the indexes after the insert
    -> Is this a safe approach?

    Edited by: saanp on Dec 2, 2012 7:01 AM
  • 5. Re: Purging a table with 98M rows
    Nikolay Savvinov Guru
    Hi,

    >
    @Nikolay Savvinov
    1) as already suggested here, don't use a PL/SQL loop where a single INSERT statement could work
    -> As I explained above, it did not work
    It's surprising that you ran out of TEMP space -- you could expect that if the query had an expensive join or
    sort, but not for a plain insert/select statement. Can you post the execution plan? Also, what's your
    Oracle version, and do you have a Diagnostics Pack license?

    2) use "append" (direct path inserts) hint
    -> Please elaborate if possible, as I am new to all this
    Just add the hint as shown here: insert /*+ append */

    But first read the documentation about direct-path inserts -- they involve certain limitations,
    so make sure they're applicable in your case.
    3) parallelize the workload if you have enough free CPUs available (parallel hint)
    -> Please elaborate if possible, as I am new to all this
    Read the documentation on parallel execution and the "parallel" hint.
    6) indexes slow down DML operations considerably -- so if measures listed above fail, you may have to drop
    and re-create the indexes after the insert
    -> Is this a safe approach?
    If done properly, there's nothing unsafe about it.

    There is one thing you can try to find the root cause of the slowness -- trace the session (or use ASH if you do have the Diagnostics Pack license) to see which objects the database is reading from and writing to.

    Best regards,
    Nikolay
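    Tracing the session as suggested can be done on 10g with DBMS_MONITOR; the SID, serial#, and username below are placeholders to be looked up in v$session first:

```sql
-- Find the target session (placeholder username: adjust the WHERE clause as needed).
select sid, serial# from v$session where username = 'APP_USER';

-- Enable SQL trace with wait events for that session (SID/serial# are examples).
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE)

-- ... reproduce the slow insert, then stop tracing:
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567)

-- The resulting trace file (in user_dump_dest) can then be summarized with tkprof.
```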
  • 6. Re: Purging a table with 98M rows
    rp0428 Guru
    Welcome to the forum!

    Whenever you post, provide your 4-digit Oracle version.
    >
    Step 1. Created backup table from Main table with required data only
    create table abc_98M_purge as
    select * from abc_98M where trunc(tran_date)>='01-jul-2011'
    -> table created with 5325411 rows

    Step 2. truncate table abc_98M

    Step 3. inserted all 5325411 rows back to abc_98M from abc_98M_purge using below procedure

    It took 4 hours to insert 5325411 rows by Step 3.

    Please suggest better approach, so that my downtime will reduce.
    >
    Have you licensed the partitioning option?

    The method above is not the most performant one if you can use partitioning. The partitioned method would be:

    1. Do a CTAS (CREATE TABLE AS SELECT) that creates a partitioned table and loads it with the data you want to keep.

    2. The CTAS table should be created in whatever tablespace you want your original table to use: either the same tablespace your source table is in, or a different one if you want to move the data.

    3. Add the same constraints to the new table that the current table has.

    4. Do a partition exchange of the current table with the new partitioned table. Since this is a metadata-only update it will only take a fraction of a second. Your current table will now have the data you want to keep, and your new partitioned table will have ALL the data that the original table used to have.

    5. Do whatever you want with the partitioned table.

    If you cannot start over and use the partitioned approach you have a couple of choices for finishing the method you started.

    A. Use the APPEND hint to use a bulk insert as Nikolay suggested
    insert /*+ append */ into abc_98M select * from abc_98M_purge;
    You could also disable logging before the insert and then re-enable it afterward to eliminate most of the REDO generation.
    alter table abc_98M NOLOGGING;
    insert /*+ append */ into abc_98M select * from abc_98M_purge;
    commit;
    alter table abc_98M LOGGING;
    If you disable logging you will NOT be able to recover that data in the event of a problem, so you need to take an immediate backup of that table after the insert.

    If you don't expect modifications before your next normal backup (e.g. that night) then your 'purge' table will still have the data and you could recover the data from it.
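    A sketch of the partition-exchange approach described above, using the thread's table names (the single-partition layout and date literal are illustrative; the constraints and indexes from steps 2-3 are elided):

```sql
-- 1. CTAS into a partitioned table that holds only the rows to keep.
create table abc_98M_part
partition by range (tran_date)
( partition p_keep values less than (maxvalue) )
as
select * from abc_98M
 where tran_date >= to_date('01-JUL-2011', 'DD-MON-YYYY');

-- 4. Metadata-only swap: afterward abc_98M holds only the kept rows,
--    while partition p_keep holds everything abc_98M used to contain.
alter table abc_98M_part
  exchange partition p_keep with table abc_98M;
```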
  • 7. Re: Purging a table with 98M rows
    748829 Newbie
    Hi,
    I encountered this error:

    SQL> insert /*+ append */ into abc_98M select * from abc_98M_purge;
    insert /*+ append */ into abc_98M select * from abc_98M_purge
    *
    ERROR at line 1:
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'


    SQL>



    Note: Oracle Version
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

    Edited by: saanp on Dec 2, 2012 9:49 PM
  • 8. Re: Purging a table with 98M rows
    Nikolay Savvinov Guru
    Hi,
    saanp wrote:
    Hi,
    I encountered this error:

    SQL> insert /*+ append */ into abc_98M select * from abc_98M_purge;
    insert /*+ append */ into abc_98M select * from abc_98M_purge
    *
    ERROR at line 1:
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'
    So it's not TEMP, it's UNDO. OK, that does make sense -- a large insert (in particular the index
    maintenance it causes) generates lots of UNDO, so you have to make sure you have enough before
    you run your statement. The amount of UNDO required to complete this operation can be of the
    same order of magnitude as the size of the data you are inserting.

    Best regards,
    Nikolay
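    Before retrying, the undo headroom can be checked with standard dictionary views (the tablespace name UNDOTBS1 follows the error message above):

```sql
-- Free space currently available in the undo tablespace.
select round(sum(bytes) / 1024 / 1024) free_mb
  from dba_free_space
 where tablespace_name = 'UNDOTBS1';

-- Recent undo consumption (in blocks) per ten-minute interval.
select begin_time, end_time, undoblks
  from v$undostat
 order by begin_time desc;
```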
  • 9. Re: Purging a table with 98M rows
    748829 Newbie
    ...and I'm not able to extend the undo tablespace as I don't have any physical space left on my disk.


    Please advise.
  • 10. Re: Purging a table with 98M rows
    Nikolay Savvinov Guru
    Hi,
    saanp wrote:
    ...and I'm not able to extend the undo tablespace as I don't have any physical space left on my disk.
    Free some space in the database -- drop or truncate unused data (e.g. manual backup tables), use compression, rebuild tables with suboptimal physical structure, shrink oversized tablespaces -- I'm sure that will free enough space to increase the UNDO as needed.

    In the unlikely event that the measures listed above don't help -- buy some disk space.

    Best regards,
    Nikolay
  • 11. Re: Purging a table with 98M rows
    rp0428 Guru
    >
    and I'm not able to extend the undo tablespace as I don't have any physical space left on my disk.
    >
    Then you can't use an approach that requires undo.

    Turn your 'purge' table into the real one. Add the constraints, triggers, indexes, etc. Then drop the original table and rename your 'purge' table to have the original name.
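    A sketch of that swap, assuming abc_98M_purge already holds the rows to keep. The index and constraint names (and the id column) are hypothetical, and any grants, triggers, and synonyms on the old table must be recreated as well:

```sql
-- The purge table becomes the real table; no undo-heavy insert is needed.
drop table abc_98M purge;
alter table abc_98M_purge rename to abc_98M;

-- Recreate supporting objects on the renamed table (names are examples).
create index abc_98M_tran_date_ix on abc_98M (tran_date);
alter table abc_98M add constraint abc_98M_pk primary key (id);
```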
  • 12. Re: Purging a table with 98M rows
    748829 Newbie
    I tried it another way:

    1. Created a backup table [abc_98M_purge] with the required data.
    2. Dropped the original table [abc_98M].
    3. Renamed the backup table to the dropped table's name [abc_98M_purge -> abc_98M].
    4. Created the indexes.

    This process seems to work fine. However, I have one concern: in step 2, when I dropped the original table, some space was still occupied in the index tablespace attached to that table. My question is: how do I release that space?
  • 13. Re: Purging a table with 98M rows
    sb92075 Guru
    saanp wrote:
    ...my question is: how do I release that space?
    Release space from where to where?

    Total disk space remains constant unless or until disk is added or removed.
  • 14. Re: Purging a table with 98M rows
    rp0428 Guru
    >
    I tried it another way:

    1. Created a backup table [abc_98M_purge] with the required data.
    2. Dropped the original table [abc_98M].
    3. Renamed the backup table to the dropped table's name [abc_98M_purge -> abc_98M].
    4. Created the indexes.
    >
    That is pretty much what I said.
    >
    This process seems to work fine. However, I have one concern: in step 2, when I dropped the original table, some space was still occupied in the index tablespace attached to that table. My question is: how do I release that space?
    >
    If the table is gone then so is its index. That space will be reused by other objects in the same tablespace.
