
Oracle.ManagedDataAccess -Version 3.1.21.1

user491524Nov 8 2022

My development environment is PowerBuilder linked to an Oracle database. After installing the latest version (PowerBuilder 2022), my web-based project stopped working. Now, trying to do a 'build & deploy' generates the error "Oracle.ManagedDataAccess -Version 3.1.21.1 cannot be found". I do have version 3.21.50 installed.
Everything else we've tried has been unsuccessful, so my next move is to install the older version (3.1.21.1), but I haven't been able to locate it online. Thoughts?

Comments

Nikolay Savvinov

Hi,

what's your excuse for not using INSERT SELECT here?

Best regards,

  Nikolay

938190

Hi Nikolay,

Thanks for your quick response.

INSERT SELECT can be used for inserting into target_tab, but because of the row-level trigger on target_tab the performance is still low.

The same goes for inserting into parent_tab, which holds 3 million records and also has a row-level trigger on it, which hinders performance.

We need a FOR loop to insert the data into child_tab for each and every record that gets inserted into parent_tab, so we can't use INSERT SELECT there.

Please correct me if I am wrong, and can you please suggest ways to tune this code?

Mike Kutz

I've done 10M rows across parent, child, and grandchild tables in <10s on a 28-disk SAN built in 2000 and a server so old that it now has about the power of an iPhone 5.

If you want this to go faster, you need to convince the powers-that-be that TRIGGERS ARE EVIL!!

BTW - you should look to see if multi-table INSERT statements can work.

That way it becomes a single "INSERT ALL INTO parent ... INTO child ... SELECT ..." statement; a rough sketch is below.
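
For what it's worth, a minimal sketch of that statement, using made-up columns (id and name on the parent; parent_id, attr_name, attr_value on the child, all sourced from table_stg) purely for illustration:

    -- unconditional multi-table insert: each staging row feeds both tables in one pass
    INSERT ALL
      INTO parent_tab (id, name)                         VALUES (id, name)
      INTO child_tab  (parent_id, attr_name, attr_value) VALUES (id, attr_name, attr_value)
    SELECT id, name, attr_name, attr_value
      FROM table_stg;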

At the very least, use DBMS_PARALLEL_EXECUTE to break down the job instead of FOR LOOPs....
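
For reference, a bare-bones sketch of the DBMS_PARALLEL_EXECUTE approach; the task name, the (id, name) column list and the parallel level of 4 are made up for illustration:

    BEGIN
      -- create a task and split TABLE_STG into rowid ranges of roughly 10,000 rows
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'stg_load');
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'stg_load',
        table_owner => USER,
        table_name  => 'TABLE_STG',
        by_row      => TRUE,
        chunk_size  => 10000);

      -- run the set-based insert once per chunk, up to 4 chunks at a time
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'stg_load',
        sql_stmt       => 'INSERT INTO target_tab (id, name)
                             SELECT id, name FROM table_stg
                              WHERE ROWID BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);

      DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'stg_load');
    END;
    /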

Nikolay Savvinov

Hi,

I still don't see a valid excuse. If triggers make the insert slow, why make it yet slower by doing row-by-row processing? And why not disable triggers prior to inserting?

If you need to insert into child tables, why not do it as INSERT...SELECT as well?
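
If disabling the triggers for the duration of a load were ever permitted, it is a one-statement change per table (table name taken from the thread; re-enabling afterwards, and the business-rule implications, are left to the reader):

    ALTER TABLE target_tab DISABLE ALL TRIGGERS;
    -- run the set-based INSERT ... SELECT load here
    ALTER TABLE target_tab ENABLE ALL TRIGGERS;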

Forgive my being blunt, but you cannot take a piece of garbage and tune it into a piece of cake. You need to use a different approach. In the long-term, if you want a successful career you need to develop a different mindset (think in sets, not arrays).

Best regards,

Nikolay

938190

Hi Nikolay,

The trigger is mandatory, as I need to compare the new value to the old value, so it can't be disabled.

Thanks. For inserting into the child tables, I will use INSERT...SELECT.

Nikolay Savvinov

Hi,

I'm not quite sure what you mean by comparing the old and new values (it's an INSERT, there is no old value), but whatever your business requirement is, chances are you can implement it without triggers.

Best regards,

  Nikolay

unknown-7404
It's a bit urgent

NO - it is NOT urgent. Not to us.

If you have an urgent problem you need to hire a consultant.

I have a performance issue in the below code,

Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.

where I am trying to insert the data from table_stg into target_tab and parent_tab, and then into the child tables via a cursor with BULK COLLECT. target_tab and parent_tab are huge tables and have a row-level trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. Now my requirement is to reduce it to 5 to 10 minutes.

Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?

Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.

This code has the EXIT statement at the beginning of the loop instead of at the end:

    FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
    vstep_no  := '4';
    vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
    EXIT WHEN stg_cursor%NOTFOUND;

The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.

You can use a COUNT test at the start of the loop but ironically you have commented it out and have now done it wrong. Either move the NOTFOUND test to the end of the loop or remove it and uncomment the COUNT test.
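
A corrected sketch of the fetch loop, reusing the cursor, collection and limit names from the snippet above (declarations and the processing body are omitted):

    LOOP
      FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
      -- (alternatively: EXIT WHEN v_rt_all_cols.COUNT = 0; at the top is also safe)
      -- ... process v_rt_all_cols here ...
      EXIT WHEN stg_cursor%NOTFOUND;  -- %NOTFOUND test as the LAST statement in the loop
    END LOOP;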

    WHEN OTHERS THEN
      ROLLBACK;

That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.

Your code also has NO exception handling for any of the individual steps or blocks of code.
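
As a hypothetical illustration (the INSERT is just a stand-in for the real work), preserving and re-raising the original error looks like this:

    BEGIN
      INSERT INTO parent_tab SELECT * FROM table_stg;  -- stand-in for the real work
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        -- log the real error and where it came from instead of swallowing it ...
        DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
        DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
        RAISE;  -- ... and re-raise so the caller still sees the failure
    END;
    /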

The code you posted also raises the question of why you are using NAME=VALUE pairs for the child data rows. Why aren't you using a standard relational table for this data?

As others have noted, you are using slow-by-slow (row-by-row) processing. Let's assume that PL/SQL, the BULK COLLECT and row-by-row processing are actually necessary.

Then you should be constructing the parent and child records in collections and inserting them in bulk using FORALL:

1. Create a collection for the new parent rows

2. Create a collection for the new child rows

3. For each set of LIMIT source row data

  a. empty the parent and child collections

  b. populate those collections with new parent/child data

  c. bulk insert the parent collection into the parent table

  d. bulk insert the child collection into the child table

And unless you really want to either load EVERYTHING or abandon everything, you should use bulk exception handling (FORALL ... SAVE EXCEPTIONS) so that the clean data gets processed and only the dirty data gets rejected.
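
To make that concrete, here is a condensed, hypothetical sketch of steps c and d with SAVE EXCEPTIONS. It assumes, purely for illustration, that parent_tab has just the columns (id, name) sourced straight from table_stg; in the real code the collection would be populated inside the fetch loop and the child table handled the same way:

    DECLARE
      TYPE parent_rows_t IS TABLE OF parent_tab%ROWTYPE;
      l_parents   parent_rows_t;
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
    BEGIN
      -- steps a/b: build the parent collection (straight from the staging table for brevity)
      SELECT id, name
        BULK COLLECT INTO l_parents
        FROM table_stg;

      -- steps c/d: bulk insert; clean rows go in, dirty rows are reported individually
      BEGIN
        FORALL i IN 1 .. l_parents.COUNT SAVE EXCEPTIONS
          INSERT INTO parent_tab VALUES l_parents(i);
      EXCEPTION
        WHEN bulk_errors THEN
          FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                                 ' rejected: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
          END LOOP;
      END;
      -- the child collection would be inserted with its own FORALL ... SAVE EXCEPTIONS
    END;
    /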
