user9093343 wrote: This is incredibly hard to believe. How are you measuring the exact time taken?

Also, as I said earlier, "insert into ... select" is taking far more time than bulk processing in my case. :(
user9093343 wrote: The same basic approach can be done in 10g.

DBMS_PARALLEL_EXECUTE is an 11g feature; we are still on 10g. :(
You now need to run this procedure 10x, once per rowid range.
create or replace procedure ProcessRowRange( fromRow varchar2, toRow varchar2 ) is
..
begin
..
  insert into target_table ( .. )
  select ..
  from   source_table
  where  rowid between fromRow and toRow;
..
end;
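One way to generate those rowid ranges in 10g (a sketch, assuming select access on DBA_EXTENTS and DBA_OBJECTS, and a hypothetical table name SOURCE_TABLE; partitioned tables need the per-partition data_object_id instead) is to build one range per extent with DBMS_ROWID:

-- Sketch: one rowid range per extent of SOURCE_TABLE (hypothetical name).
-- dbms_rowid.rowid_create( rowid_type, object_number, relative_fno,
--                          block_number, row_number )
select dbms_rowid.rowid_create( 1, o.data_object_id, e.relative_fno,
                                e.block_id, 0 )                     as from_rowid,
       dbms_rowid.rowid_create( 1, o.data_object_id, e.relative_fno,
                                e.block_id + e.blocks - 1, 32767 )  as to_rowid
from   dba_extents e
       join dba_objects o
       on   o.owner = e.owner
       and  o.object_name = e.segment_name
where  e.owner = user
and    e.segment_name = 'SOURCE_TABLE';

Each row of this query supplies one (from_rowid, to_rowid) pair to pass to ProcessRowRange, so the table can be processed by several sessions in parallel without overlap.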
user9093343 wrote: Also as I told earlier..."insert into ... select" is taking too much time compared to bulk processing in my case. :(

Tell me why you think an insert...select is slower. Does it read more rows? Does it write more rows? Surely the I/O load is the same.
user9093343 wrote: As a DBA, I have a super-user schema that has specific system access, like "select any dictionary". I then implement code (e.g. a function returning rowid ranges) in that schema with AUTHID DEFINER, and grant an app/user schema execute rights on it. This allows an app or user access to specific super-user functionality in a controlled and managed fashion.
Billy - I have requested access to the DBA tables to try your suggestion in 10g. Will keep you posted.
Also - just out of curiosity - whenever I use bulk collect (and FORALL) in any situation, I see the last 10 to 20% of records take a lot longer than the initial 80% of records loaded. Any idea?

Basic reaction is that you are misdiagnosing that - it may be what you think you are seeing, but it is not.
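One way to check whether the later batches really are slower is to time each FORALL batch explicitly instead of watching overall progress. A minimal sketch (source_table, target_table and the LIMIT of 1000 are assumptions, not names from this thread) using DBMS_UTILITY.GET_TIME, which returns time in hundredths of a second:

declare
  cursor c is select * from source_table;   -- hypothetical source
  type t_rows is table of c%rowtype;
  l_rows  t_rows;
  l_start pls_integer;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 1000;
    exit when l_rows.count = 0;

    l_start := dbms_utility.get_time;       -- hundredths of a second
    forall i in 1 .. l_rows.count
      insert into target_table values l_rows(i);
    dbms_output.put_line( 'batch of '||l_rows.count||' rows took '||
                          (dbms_utility.get_time - l_start)||' cs' );
  end loop;
  close c;
  commit;
end;
/

If the per-batch times printed here are roughly flat, the apparent slowdown at the end is coming from somewhere else (for example how progress is being observed), not from the bulk inserts themselves.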