
What is the best way to delete millions of rows from a large table?

Answers

  • BEDE
    BEDE Oracle Developer Member Posts: 2,277 Gold Trophy
    edited Jul 23, 2018 8:49AM

    You may have a problem if really many transactions have to take place on the same table. In that case, in order to avoid a possible deadlock, you may do something like:

    begin
       loop
          delete from x_table where ... and rownum <= 1000;
          exit when sql%rowcount = 0;
          commit;
       end loop;
       commit;
    end;

    That will take quite a long time, but it will certainly do the job.

  • Sevdimali Isayev
    Sevdimali Isayev Member Posts: 123 Red Ribbon
    edited Jul 23, 2018 8:57AM

    Thanks for the suggestion. I think we will choose partitioning the table.

  • Cookiemonster76
    Cookiemonster76 Member Posts: 3,410
    edited Jul 23, 2018 9:02AM

    A process that only deletes from one table shouldn't ever cause a deadlock.

  • BEDE
    BEDE Oracle Developer Member Posts: 2,277 Gold Trophy
    edited Jul 23, 2018 9:03AM

    But just how do you partition the table if several sessions need to access it? You must have exclusive use of the table for any DDL. So, to partition the table, pick the right time when you can do that.

  • Sevdimali Isayev
    Sevdimali Isayev Member Posts: 123 Red Ribbon
    edited Jul 23, 2018 9:06AM

    I think they will accept a single, one-off downtime for this situation.

  • Joerg.Sobottka
    Joerg.Sobottka Senior Consultant and Oracle Ace Member Posts: 595 Bronze Trophy
    edited Jul 23, 2018 9:14AM

    You can partition the table using DBMS_REDEFINITION. Then you don't have any downtime.
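
    A minimal sketch of that online redefinition, assuming a table SCOTT.X_TABLE with a primary key and a CREATED_DT column to partition on (all of these names, and the monthly interval scheme, are illustrative assumptions, not from the thread):

    ```sql
    -- 1. Verify the table can be redefined online using its primary key.
    exec DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'X_TABLE', DBMS_REDEFINITION.CONS_USE_PK);

    -- 2. Create an empty interim table with the desired partitioning.
    create table scott.x_table_part
      partition by range (created_dt)
      interval (numtoyminterval(1, 'MONTH'))
      (partition p0 values less than (date '2018-01-01'))
    as select * from scott.x_table where 1 = 0;

    -- 3. Start the redefinition, resync changes, then swap the tables.
    --    (Indexes, triggers, and constraints can be carried over with
    --    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS before finishing.)
    exec DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'X_TABLE', 'X_TABLE_PART');
    exec DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'X_TABLE', 'X_TABLE_PART');
    exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'X_TABLE', 'X_TABLE_PART');
    ```

    After FINISH_REDEF_TABLE, X_TABLE is the partitioned table and old data can be removed with fast partition drops instead of row-by-row deletes.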

  • Sven W.
    Sven W. Member Posts: 10,529 Gold Crown
    edited Jul 23, 2018 9:18AM
    Cookiemonster76 wrote:
    A process that only deletes from one table shouldn't ever cause a deadlock.

    It shouldn't, but it can happen. Especially when the delete runs in parallel.

  • SeánMacGC
    SeánMacGC Member Posts: 2,914 Gold Trophy
    edited Jul 23, 2018 10:48AM
    Sven W. wrote:
    Cookiemonster76 wrote: A process that only deletes from one table shouldn't ever cause a deadlock.
    It shouldn't, but it can happen. Especially when the delete runs in parallel.

    And particularly if there are bitmap indexes involved.

  • Cookiemonster76
    Cookiemonster76 Member Posts: 3,410
    edited Jul 23, 2018 11:07AM

    If they've got bitmaps on a table that gets 50-60 new rows per second then they're likely in for a world of pain.

  • Mark D Powell
    Mark D Powell Member Posts: 5,914 Blue Diamond
    edited Jul 23, 2018 11:44AM

    Sevdimali, "best" is a relative term. What is best for me may not be best for you. The best solution also depends on whether this is a one-time operation, or the first DELETE of all the old data after which a regular DELETE process will take its place, and so on. For initial purges, where the quantity of data is larger than what will normally be purged, and for one-time purges, I do not think the speed of the DELETE should be a primary consideration. What really matters is that the purge process not impact the system.

    - -

    The following is a simple but effective approach when you want to limit the impact. Use PL/SQL: write a cursor to select the PK of the target rows, loop through the cursor, delete by PK, and commit every N rows to limit the UNDO usage. N is determined by the amount of UNDO you are willing to let this process consume. Consider putting a sleep between commits, or every X commits, if you need to allow the deleted data to age out of UNDO.
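
    The steps above can be sketched like this; the table X_TABLE, PK column PK_ID, purge predicate, batch size of 10,000, and one-second sleep are all illustrative assumptions:

    ```sql
    declare
       cursor c_purge is
          select pk_id
            from x_table
           where created_dt < add_months(sysdate, -36);  -- hypothetical purge rule
       v_count pls_integer := 0;
    begin
       for r in c_purge loop
          delete from x_table where pk_id = r.pk_id;
          v_count := v_count + 1;
          if mod(v_count, 10000) = 0 then   -- N = 10,000: caps UNDO per transaction
             commit;
             dbms_lock.sleep(1);            -- pause so deleted data can age out of UNDO
          end if;
       end loop;
       commit;
    end;
    /
    ```

    Note that committing inside a loop over an open cursor can raise ORA-01555 if the rows age out of UNDO before the cursor finishes, which is exactly why sizing N and pacing the sleeps matters.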

    - -

    IMHO -- Mark D Powell --

This discussion has been closed.