
Deleting Records from 100+ tables

User_4LC0G Member Posts: 13 Green Ribbon

Hello Experts,

Oracle DB version : Oracle Database 12c Enterprise Edition Release - 64 bit Production

We have a requirement to delete data from 100+ tables.

We have currently implemented this requirement using DELETE statements on each of the 100+ tables.

After deleting the records from all the tables , we issue a COMMIT.

Below is pseudo code for each table:

     DECLARE
       CURSOR c1 IS
         SELECT 'x' FROM tab1 WHERE id = :b1 FOR UPDATE NOWAIT; -- acquire row locks
     BEGIN
       OPEN c1;                         -- opening the cursor takes the row locks
       DELETE FROM tab1 WHERE id = :b1;
       CLOSE c1;                        -- locks are held until COMMIT/ROLLBACK
     END;


Column ID is indexed.

The current implementation performs poorly: it takes around 1 hour to delete 1,000 IDs from the 100+ tables, which must be processed in a certain order due to integrity constraints.

Our target is to delete 10,000 IDs from all tables.

To improve performance, we have thought of using BULK COLLECT and FORALL for the DELETEs.
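Roughly, the batched delete we have in mind for a single table looks like this (just a sketch; tmp_ids_to_delete is a hypothetical driving table holding the IDs to remove):

```sql
DECLARE
  c_batch CONSTANT PLS_INTEGER := 1000;
  CURSOR c_ids IS
    SELECT id FROM tmp_ids_to_delete;      -- hypothetical table of IDs to delete
  TYPE t_id_tab IS TABLE OF tmp_ids_to_delete.id%TYPE;
  v_ids t_id_tab;
BEGIN
  OPEN c_ids;
  LOOP
    FETCH c_ids BULK COLLECT INTO v_ids LIMIT c_batch;
    EXIT WHEN v_ids.COUNT = 0;
    FORALL i IN 1 .. v_ids.COUNT
      DELETE FROM tab1 WHERE id = v_ids(i);
    COMMIT;                                -- commit per batch to limit UNDO usage
  END LOOP;
  CLOSE c_ids;
END;
/
```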

However, I have the following questions:

1. Could you please suggest how to acquire row-level locks on a table while using collections?

2. Could you please suggest how to issue a COMMIT after every 1,000 records, to ensure we don't run out of available space in the UNDO tablespace?

I would be grateful if you could also suggest a better approach apart from the one I mentioned above.

Thank You.



  • L. Fernigrini
    L. Fernigrini Data Engineer Sr Consultant Member Posts: 3,856 Silver Crown

    Just run the DELETE on each table with a WHERE clause that filters the desired rows. 10K rows is a pretty small amount. Oracle uses row-level locking, so you do not need to worry about "page" or "table" lock escalation like in SQL Server.

    You may commit after each table or after a set of tables, but I would not worry too much about it; we are talking about deleting approximately 1 million rows (10K rows x 100 tables).

  • Jan Gorkow
    Jan Gorkow Member Posts: 144 Gold Badge

    Hi @User_4LC0G ,

    I would collect all relevant IDs in one (temporary) table and then delete table by table via

    DELETE FROM tab1 WHERE id IN (SELECT id FROM tmp_ids_to_delete);

    After all table deletes are done, one COMMIT finalizes the procedure. You don't have to lock the rows prior to deletion; in my opinion that is unnecessary overhead. If you have an issue with your UNDO tablespace, you could commit after each single table deletion. Either way, this is bulk processing, which should be fast enough to delete far more than 10,000 IDs from 100 tables per hour.
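    A sketch of that approach (the temporary table and the table names are placeholders):

```sql
-- one-time setup: a temporary table to hold the IDs to delete
CREATE GLOBAL TEMPORARY TABLE tmp_ids_to_delete (
  id NUMBER PRIMARY KEY
) ON COMMIT PRESERVE ROWS;  -- rows survive any intermediate commits

-- per run: load the IDs, then delete table by table in constraint order
INSERT INTO tmp_ids_to_delete (id)
  SELECT id FROM source_of_ids;  -- however the 10,000 IDs are determined

DELETE FROM child_tab  WHERE id IN (SELECT id FROM tmp_ids_to_delete);
DELETE FROM parent_tab WHERE id IN (SELECT id FROM tmp_ids_to_delete);
-- ... one DELETE per table, children before parents ...
COMMIT;
```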

    Best regards


  • User_H3J7U
    User_H3J7U Member Posts: 1,062 Gold Trophy

    on delete cascade
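    That is, if each child table's foreign key is declared (or re-created) with ON DELETE CASCADE, deleting the parent row removes the dependent rows automatically. A sketch with hypothetical table and constraint names:

```sql
ALTER TABLE child_tab DROP CONSTRAINT fk_child_parent;  -- hypothetical constraint
ALTER TABLE child_tab
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_tab (id)
  ON DELETE CASCADE;

-- a single delete on the parent now cascades to the children
DELETE FROM parent_tab WHERE id = :b1;
```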

  • Paulzip
    Paulzip Member Posts: 8,720 Blue Diamond

    @L. Fernigrini

    OP may be concerned with lock waits for other sessions if they're doing deletes in a live environment, hence the pessimistic locking model.

    @User_4LC0G As a template, insert your ids into a table to run your delete from and then run something like below (untested). The example is for one table, but perhaps you could use dynamic SQL to scale it or auto-generate it for all your tables. The locking is done in batches to minimise PGA usage. The row locks should remain until you issue a commit.

    -- pre => populate t_id_table with the list of your ids
    declare
      c_limit constant pls_integer default 100;
      cursor curtable is
        select id
        from t_table
        where id in (select id from t_id_table)
        for update nowait;
      type ttable is table of curtable%rowtype;
      vtable ttable;
    begin
      open curtable;
      loop
        fetch curtable bulk collect into vtable limit c_limit; -- pessimistically lock in batches
        exit when vtable.count = 0;
      end loop;
      close curtable;
      delete from t_table where id in (select id from t_id_table);
      -- commit only after all tables are done; the row locks persist until then
    end;