user8837073 wrote:There are only a few reasons to implement cursor loops. One is to avoid filling up rollback/undo on massive bulk DML. Another is to avoid contention issues due to locking. Although the bulk SQL should itself be more efficient, your approach of looping and committing periodically is worth considering for the reasons you mentioned.
I would like to know how to enhance a merge statement so that it issues a commit for every N records updated ... ???
Perform as many DML operations as possible between commits to avoid the overhead of committing too often. Again, this will not be as efficient as a single bulk DML statement, but it should address the locking issues you mentioned. Watch performance carefully!
The outline: for each batch of records, perform the DML, increment a counter, and use mod() (or remainder()) on that counter to issue a periodic commit.
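A minimal sketch of that mod()-based pattern, using a plain row-by-row loop (the table my_target and source query my_source are hypothetical; names and the batch size are illustrative only):

declare
  k_commit  constant number := 10000;  -- commit every 10,000 rows (illustrative)
  v_counter number := 0;
begin
  for r in (select pk, col2 from my_source) loop  -- my_source is a placeholder
    update my_target t
       set t.col2 = r.col2
     where t.pk = r.pk;
    v_counter := v_counter + 1;
    if mod(v_counter, k_commit) = 0 then
      commit;                                     -- periodic commit via mod()
    end if;
  end loop;
  commit;                                         -- commit the final partial batch
end;
/

Be aware that committing while a cursor is still open can expose the loop to ORA-01555 (snapshot too old) on long-running runs, which is another reason to test this carefully.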
You should experiment with k_commit and maybe k_limit, too. The code below will commit every 10,000,000 records (k_limit * k_commit rows).
declare
  k_limit   constant number := 1000;
  k_commit  constant number := 10000;
  v_counter number := 0;
  -- Cursor and Collection types, here
begin
  open c;
  loop
    fetch c bulk collect into col1_t, col2_t, col3_t limit k_limit;
    exit when col1_t.count = 0;
    v_counter := v_counter + 1;
    forall i in 1 .. col1_t.count
      merge into standard t
      using (select col1_t(i) pk, col2_t(i) col2, col3_t(i) col3 from dual) s
      on (t.pk = s.pk)
      when matched then update ...
      when not matched then insert ...;
    if mod(v_counter, k_commit) = 0 then
      commit;
    end if;
  end loop;
  close c;
  commit;
end;
/