user9093343 wrote:The primary question you should be asking is: why am I using BULK COLLECT (PL/SQL) and not just doing it in direct SQL?
In most circumstances, I find that the bulk collect script performs well enough, but it slows down after a certain number of records have been processed.
I was trying to load 14 million records through a FORALL insert. It loads 10 million records in 10 minutes, but the process slows down from there, and the remaining 4 million records take more than 2 hours.
I have tried different LIMIT values (100, 500, 1000, 10000) but still experience the same problem.
Could you please help me understand the actual problem?
user9093343 wrote:You mean the UNDO. I have heard that before.
And you will be issuing COMMIT in the loop, correct? It's as bad as it gets. I hope you have a very good restart mechanism in place in case the process fails in the middle; otherwise you will be left with messed-up data. Oh, and yes, I forgot to mention [url http://docs.oracle.com/cd/B19306_01/server.102/b14219/e1500.htm#sthref833]ORA-1555
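A minimal sketch of one possible restart mechanism, assuming table1 has a unique ascending key (the id column below is hypothetical) and a one-row checkpoint table:

-- hypothetical one-row checkpoint table recording how far the load got
CREATE TABLE load_checkpoint (last_id NUMBER);
INSERT INTO load_checkpoint VALUES (0);
COMMIT;

-- inside the batch loop, after each successful FORALL + COMMIT:
--    UPDATE load_checkpoint SET last_id = <highest id in the batch just loaded>;
--    COMMIT;

-- on restart, the driving cursor skips the rows already loaded:
--    SELECT ... FROM table1
--     WHERE id > (SELECT last_id FROM load_checkpoint)
--     ORDER BY id;

Without something like this, the intermediate commits leave the target table half-loaded with no clean way to tell which rows made it.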
Since the record count is in the millions for this DML operation, we thought of using bulk collect instead of direct SQL. We have very limited tablespace allocated, so we usually end up with tablespace problems.
user9093343 wrote:Not very efficient code - and written in ugly uppercase too.
Please find my sample code below.
The worst part is the exception handling.
DECLARE
   CURSOR cur_test IS
      SELECT funct_1(col1) AS col1,
             funct_2(col_2) AS col2,
             col3,
             col4
        FROM table1;

   TYPE v_number  IS TABLE OF NUMBER          INDEX BY PLS_INTEGER;
   TYPE v_varchar IS TABLE OF VARCHAR2 (2000) INDEX BY PLS_INTEGER;

   v_col1   v_number;
   v_col2   v_number;
   v_col3   v_varchar;
   v_col4   v_varchar;
BEGIN
   DBMS_OUTPUT.put_line ('Started at: ' || TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));

   OPEN cur_test;

   LOOP
      -- fetch the next batch of up to 100 rows
      FETCH cur_test
         BULK COLLECT INTO v_col1, v_col2, v_col3, v_col4
         LIMIT 100;

      EXIT WHEN v_col1.COUNT = 0;

      FORALL i IN 1 .. v_col1.COUNT
         INSERT INTO TABLE2 (col1, col2, col3, col4)
              VALUES (v_col1 (i), v_col2 (i), v_col3 (i), v_col4 (i));

      COMMIT;   -- commit inside the fetch loop
   END LOOP;

   CLOSE cur_test;

   DBMS_OUTPUT.put_line ('Ended at: ' || TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line (   '['
                            || TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss')
                            || ']: - [Loading TABLE2 table] - ABORTED. '
                            || SQLERRM);
      ROLLBACK;
END;
/
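For what it's worth, a minimal sketch of how the same FORALL could report row-level failures instead of using a blanket WHEN OTHERS, via SAVE EXCEPTIONS (trimmed to one column; the table and column names are reused from the example above):

DECLARE
   TYPE v_number IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   v_col1       v_number;
   dml_errors   EXCEPTION;
   PRAGMA EXCEPTION_INIT (dml_errors, -24381);   -- ORA-24381: error(s) in array DML
BEGIN
   v_col1 (1) := 1;   -- populated by BULK COLLECT in the real loop

   FORALL i IN 1 .. v_col1.COUNT SAVE EXCEPTIONS
      INSERT INTO TABLE2 (col1) VALUES (v_col1 (i));
EXCEPTION
   WHEN dml_errors
   THEN
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
      LOOP
         DBMS_OUTPUT.put_line (   'Row '
                               || SQL%BULK_EXCEPTIONS (j).ERROR_INDEX
                               || ' failed: '
                               || SQLERRM (-SQL%BULK_EXCEPTIONS (j).ERROR_CODE));
      END LOOP;
      RAISE;   -- re-raise instead of swallowing the failure
END;
/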
If you have an UNDO space issue, work with your DBA and get what you need. An additional suggestion: try to remove the functions funct_1 and funct_2 and incorporate their logic into the SELECT directly.
begin
   dbms_output.put_line ('started at: ' || to_char (sysdate, 'mm/dd/yyyy hh24:mi:ss'));

   insert into table2 (col1, col2, col3, col4)
      select funct_1(col1) as col1,
             funct_2(col_2) as col2,
             col3,
             col4
        from table1;

   commit;

   dbms_output.put_line ('ended at: ' || to_char (sysdate, 'mm/dd/yyyy hh24:mi:ss'));
end;
/
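If undo generation is the real constraint, a direct-path insert may also be worth testing. A sketch only, assuming table2 can tolerate the exclusive table lock a direct-path load takes and has no triggers or referential constraints that would disable direct path:

alter session enable parallel dml;   -- optional; only if parallel execution is available

insert /*+ append */ into table2 (col1, col2, col3, col4)
   select funct_1(col1), funct_2(col_2), col3, col4
     from table1;

commit;   -- required: after a direct-path insert the session cannot query table2 until it commits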
Billy Verreynne wrote:Easy to explain: we all know the database is evil.
How is sending the SQL data from source to target table via a detour any better?
user9093343 wrote:No. I think you are confused by being worried about an irrelevance.
DIRECT SQL (insert into ... select * from ...) takes more than 2 hours and keeps on running. Most of the time I also run into a temp space issue.
But the way I have written it here processes almost 10 million records in 10 minutes. I am only worried about the remaining 4 million.
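If the direct SQL keeps dying on temp space, it may be worth watching what is actually consuming it while the statement runs; a diagnostic sketch against the standard V$TEMPSEG_USAGE view (needs SELECT privilege on the V$ views):

select s.sid,
       s.sql_id,
       u.tablespace,
       u.segtype,      -- SORT, HASH, LOB_DATA, ...
       u.blocks
  from v$tempseg_usage u
  join v$session s on s.saddr = u.session_addr
 order by u.blocks desc;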