Managed Driver 18.3, 19.3.0, 19.3.1, 19.9.0 Bulk Update problem
Hi all,
I am writing a piece of code to send data to the database using the batch method. The code works fine when the operation is an insert, but in the case of an update the command execution causes a WAIT SHORT TIME and never returns.
In both tests I am using the same set of data (nearly 65K records) and the command text is built manually.
In the case of the update, it seems that the records are sent one by one rather than as a full batch (even though all the information is set on the command before it is sent to the database).
Are there any known problems related to updating data with OracleCommand in this managed version? Is there any workaround?
Answers
-
I've never seen this problem personally.
How are you conducting the bulk update? Anonymous PL/SQL?
Do all the batch statements successfully execute?
Did this work with the same number of updates using unmanaged ODP.NET in the past?
-
Just to let you know my line of thinking, I'm first trying to determine if the problem lies with the provider or DB and to also characterize what triggers the bug. If it's easier, you can email us an ODP.NET trace to dotnet_us(at)oracle.com and we'll take a look.
-
I am doing it directly from code through OracleCommand.ExecuteNonQuery().
For the insert statement the code works perfectly.
The part of the code related to this would be something similar to the attached file.
The executed query is like this:
update table2 set where column1 = :column1 and column2 = :column2 and column3 = :columns3 and column4 = :column4
column1, 2 and 4 are VARCHAR2; column3 is a NUMBER (Int64).
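For reference, here is a minimal sketch of the kind of array-binding update being described, using ODP.NET's OracleCommand.ArrayBindCount. The column5 SET column, the sample data and the connection string are placeholders (the post does not show the real SET list), so treat this as an illustration of the pattern, not the poster's actual code:

using System;
using System.Data;
using Oracle.ManagedDataAccess.Client;

class BatchUpdateSketch
{
    static void Main()
    {
        // Sample data only; the real process binds ~65K values per array.
        string[] col5 = { "v1", "v2" };
        string[] col1 = { "A", "B" };
        string[] col2 = { "X", "Y" };
        long[]   col3 = { 1L, 2L };
        string[] col4 = { "P", "Q" };

        using (var con = new OracleConnection("User Id=user;Password=pwd;Data Source=orcl"))
        using (var cmd = con.CreateCommand())
        {
            con.Open();
            cmd.CommandText =
                "update table2 set column5 = :column5 " +
                "where column1 = :column1 and column2 = :column2 " +
                "and column3 = :column3 and column4 = :column4";

            // One statement is sent; the server iterates it ArrayBindCount times.
            cmd.ArrayBindCount = col1.Length;

            // ODP.NET binds by position by default, so the order must match the SQL text.
            cmd.Parameters.Add(":column5", OracleDbType.Varchar2, col5, ParameterDirection.Input);
            cmd.Parameters.Add(":column1", OracleDbType.Varchar2, col1, ParameterDirection.Input);
            cmd.Parameters.Add(":column2", OracleDbType.Varchar2, col2, ParameterDirection.Input);
            cmd.Parameters.Add(":column3", OracleDbType.Int64,    col3, ParameterDirection.Input);
            cmd.Parameters.Add(":column4", OracleDbType.Varchar2, col4, ParameterDirection.Input);

            Console.WriteLine("Rows affected: " + cmd.ExecuteNonQuery());
        }
    }
}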
-
The insert command took 00:00:03.0944938, and I would expect something similar for the update process.
The database server is 12.2.
-
Hello Alex,
Last week I upgraded my development environment from 12.2 to 19c (19.3). I am trying to create the required log file, but so far I have not had a positive result.
I have added the following lines to app.config:
<oracle.manageddataaccess.client>
<version number="4.122.19.1">
<settings>
<setting name="TraceLevel" value="9"/>
<setting name="TraceOption" value="0"/>
<setting name="TraceFileLocation" value="c:\temp\trace_example.trc"/>
<setting name="TNS_ADMIN" value="C:\Oracle19c\product\19.3.0\dbhome_1\network\admin"/>
</settings>
</version>
</oracle.manageddataaccess.client>
But nothing is written to the expected location. Any idea? What am I doing wrong?
Regards.
-
I am testing the problem and I am reproducing the same behaviour. The data table seems to be sent (quickly or not, I don't know), but there is no answer from the database.
This is the log from the insert process:
Starting test.
Server: <servername>, Port: 1521, Service: <servicename>.
Read StagingDS.
Read dataset file. Size: Process time: 00:00:03.9721556
Send dataset.
Insert table table1.
Batch Number: 0, Offset: 0, Lenght: 1.
Insert table table2.
Batch Number: 0, Offset: 0, Lenght: 67179.
Insert table table3.
Batch Number: 0, Offset: 0, Lenght: 912.
Sent staging data. Process time: 00:00:05.8123687
Process finished successfully.
Process time: 00:00:14.6125231
But for the update, the process waits indefinitely on "update table table2" until an error is thrown: Source: Oracle Data Provider for .NET, Managed Driver. Message: ORA-01013. The current timeout value is 600 seconds.
-
Hello Alex, the same behaviour is also seen with Managed Driver v18.3.
Finally, I am able to generate the trace file (the level had a wrong value). What level do you need?
-
Hello Alex,
Another strange behaviour: after the update process was aborted, I truncated the contents of the 4 tables and relaunched the insert process, but this time it took a lot of time to finish. No trace file is being created even though it is configured in the app.config file.
Our custom log shows:
Starting test.
Server: <server_name>, Port: 1521, Service: <service_name>
Read StagingDS.
Read dataset file. Size: Process time: 00:00:03.7559484
Send dataset.
Insert table table1.
Batch Number: 0, Offset: 0, Lenght: 1.
Insert table table2.
Batch Number: 0, Offset: 0, Lenght: 67179.
Insert table table3.
Batch Number: 0, Offset: 0, Lenght: 912.
Sent staging data. Process time: 00:03:54.9977448
Process finished successfully.
Process time: 00:04:01.0050510
-
The trace level should be set to 7 and the trace file location should be set to a directory name, not a file name. From the trace, we should be able to see why the update is taking so long.
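In other words, the earlier app.config snippet with just those two settings adjusted would look roughly like this (same version number and TNS_ADMIN path as before; C:\temp is a placeholder directory):

<oracle.manageddataaccess.client>
<version number="4.122.19.1">
<settings>
<setting name="TraceLevel" value="7"/>
<setting name="TraceOption" value="0"/>
<setting name="TraceFileLocation" value="C:\temp"/>
<setting name="TNS_ADMIN" value="C:\Oracle19c\product\19.3.0\dbhome_1\network\admin"/>
</settings>
</version>
</oracle.manageddataaccess.client>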
I'm curious whether batching fewer updates helps characterize the problem of control never returning to the app. Can you try updating just one row and checking whether program control returns?
I've seen updates take much longer if data types are mismatched. For example, if you try to update a VARCHAR2 with an NVARCHAR2, the DB will have to conduct a data type conversion for each value that is checked against a current DB value.
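As a hypothetical illustration of that mismatch (parameter and column names are placeholders, not from the poster's code), the difference is just in the OracleDbType used for the bind:

using Oracle.ManagedDataAccess.Client;

class BindTypeSketch
{
    static void Main()
    {
        // Sample values destined for a VARCHAR2 column.
        string[] values = { "A", "B", "C" };

        // Binding with the column's native type lets the server compare values directly.
        var matched = new OracleParameter(":column1", OracleDbType.Varchar2) { Value = values };

        // Binding the same data as NVARCHAR2 against a VARCHAR2 column forces a conversion
        // for every value compared, which can slow a large batch considerably.
        var mismatched = new OracleParameter(":column1", OracleDbType.NVarchar2) { Value = values };
    }
}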
-
Hi Alex.
We use only VARCHAR2, but maybe some other conversion is being done internally.
I am sending you the trace now; I will run the test you requested and come back with the result as soon as possible.
I am sending you a 7z file.
-
Hello Alex, the e-mail has been rejected due to a size limit. The size of the file is 3.4 MB. What is the maximum size supported by your e-mail account?
-
I have now executed the test you requested, with a positive result: after modifying 5 records, they were successfully updated and control returned to the app without any problem.
-
Hello Alex,
I cannot send you the logs. I have split the original file in two, one of 1.3 MB and another of 0.5 MB, but both have been rejected: Message Size Violation.
-
The email account can receive attachments in excess of 10 MB. Perhaps rename the extension to something else. I'm not sure if they've changed our email filters on the Oracle end, but sometimes there are email filters on the customer side that could be blocking the attachment.
Since 5 updates work, the problem is either triggered by the large number of updates, or the performance is so slow that it only seems as if control never returns before the command timeout occurs (or you lose patience).
One thing I need to check that I haven't yet is whether there is an upper limit to the number of updates we can do. 65K may exceed some limit we can support.
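One way to probe that from the client side is to split the bind arrays into smaller batches, much as David does later in the thread with batch sizes of 65,000 and 10,000. A rough sketch under that assumption, with placeholder column names and only two of the bind columns shown:

using System;
using System.Data;
using System.Linq;
using Oracle.ManagedDataAccess.Client;

static class ChunkedUpdateSketch
{
    // The batch size David tries later in the thread; the actual safe limit is unknown here.
    const int BatchSize = 10000;

    // col1/col3 stand in for the full set of bind arrays; a real version would slice every column.
    public static void UpdateInChunks(OracleConnection con, string sql, string[] col1, long[] col3)
    {
        for (int offset = 0; offset < col1.Length; offset += BatchSize)
        {
            int length = Math.Min(BatchSize, col1.Length - offset);
            using (var cmd = con.CreateCommand())
            {
                cmd.CommandText = sql;
                cmd.ArrayBindCount = length;
                cmd.Parameters.Add(":column1", OracleDbType.Varchar2,
                                   col1.Skip(offset).Take(length).ToArray(), ParameterDirection.Input);
                cmd.Parameters.Add(":column3", OracleDbType.Int64,
                                   col3.Skip(offset).Take(length).ToArray(), ParameterDirection.Input);
                cmd.ExecuteNonQuery();
            }
        }
    }
}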
-
Hello Alex, I have renamed the file by adding a .txt extension, but it doesn't work.
On the other hand, I am checking the same process with a batch size of 65,000; the previous value was 200,000 (which works fine for inserts).
-
Apologies Alex, but I am not able to send you the trace files. The server always rejects the emails.
Could I send them to you some other way?
-
You can share the trace on GitHub with me (handle = alexkeh) or something like Box or DropBox with the email handle above. Alternatively, you can open up an SR with Oracle Support to make it easy to share files securely.
-
Hello Alex,
I have sent you a link to download the trace files.
I have checked the process after decreasing the batch size to 10,000. The process finished successfully, but the performance is very, very poor.
[2020-10-26 17:50:59.154] [INF]. Starting test.
[2020-10-26 17:50:59.270] [INF]. Server: <server>, Port: 1521, Service: <service>.
[2020-10-26 17:51:00.657] [INF]. Read StagingDS.
[2020-10-26 17:51:04.080] [INF]. Table table1, Rows: 1.
[2020-10-26 17:51:04.080] [INF]. Table table2, Rows: 67179.
[2020-10-26 17:51:04.080] [INF]. Table table3, Rows: 912.
[2020-10-26 17:51:04.080] [INF]. Table table4, Rows: 0.
[2020-10-26 17:51:04.080] [DBG]. Read dataset file. Size: Process time: 00:00:03.4225649
[2020-10-26 17:51:06.924] [INF]. Send dataset. Batch Size: 10000
[2020-10-26 17:51:06.939] [INF]. Registers to insert in table1: 0.
[2020-10-26 17:51:06.939] [INF]. Registers to update in table1: 0.
[2020-10-26 17:51:06.939] [INF]. Registers to delete in table1: 0.
[2020-10-26 17:51:06.955] [INF]. Registers to insert in table2: 0.
[2020-10-26 17:51:06.970] [INF]. Registers to update in table2: 67179.
[2020-10-26 17:51:06.970] [INF]. Registers to delete in table2: 0.
[2020-10-26 17:51:12.635] [INF]. Update table table2.
[2020-10-26 17:51:12.869] [INF]. Columns to update: 3.
[2020-10-26 17:51:12.932] [INF]. Batch Number: 0, Offset: 0, Lenght: 10000.
[2020-10-26 17:55:15.043] [INF]. Batch Number: 1, Offset: 10000, Lenght: 10000.
[2020-10-26 17:58:55.899] [INF]. Batch Number: 2, Offset: 20000, Lenght: 10000.
[2020-10-26 18:02:42.095] [INF]. Batch Number: 3, Offset: 30000, Lenght: 10000.
[2020-10-26 18:06:42.745] [INF]. Batch Number: 4, Offset: 40000, Lenght: 10000.
[2020-10-26 18:10:20.579] [INF]. Batch Number: 5, Offset: 50000, Lenght: 10000.
[2020-10-26 18:14:19.973] [INF]. Batch Number: 6, Offset: 60000, Lenght: 7179.
[2020-10-26 18:16:55.061] [INF]. Registers to insert in table3: 0.
[2020-10-26 18:16:55.061] [INF]. Registers to update in table3: 912.
[2020-10-26 18:16:55.061] [INF]. Registers to delete in table3: 0.
[2020-10-26 18:16:55.139] [INF]. Update table table3.
[2020-10-26 18:16:55.161] [INF]. Columns to update: 3.
[2020-10-26 18:16:55.161] [INF]. Batch Number: 0, Offset: 0, Lenght: 912.
[2020-10-26 18:16:56.357] [DBG]. Sent staging data. Process time: 00:25:49.4332096
[2020-10-26 18:16:56.357] [INF]. Process finished successfully.
[2020-10-26 18:16:56.363] [DBG]. Process time: 00:25:58.1122552
-
Hello Alex,
I have rewritten the code to use the OracleDataAdapter class, but the behaviour is the same: all the information for the update process seems to be sent one by one, not using the batch configuration. Furthermore, more errors appear, related to column mappings and null references (the code is similar to the one used to send the data through OracleCommand).
I guess I am doing something wrong, but I don't know how to continue.
This is a huge performance penalty in our application.
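For comparison, this is roughly what a batched DataAdapter update looks like in ODP.NET; the table, columns and batch size are placeholders, and whether this configuration avoids the row-by-row behaviour described above is not confirmed in the thread:

using System.Data;
using Oracle.ManagedDataAccess.Client;

class AdapterBatchSketch
{
    // 'changes' is assumed to hold the modified rows for table2 (placeholder names throughout).
    static void SendUpdates(OracleConnection con, DataTable changes)
    {
        var update = con.CreateCommand();
        update.CommandText = "update table2 set column5 = :column5 where column1 = :column1";
        // Bind each parameter to the DataTable column it should read its value from.
        update.Parameters.Add(":column5", OracleDbType.Varchar2, 0, "column5");
        update.Parameters.Add(":column1", OracleDbType.Varchar2, 0, "column1");
        // Batching requires that the command does not try to refresh each row after execution.
        update.UpdatedRowSource = UpdateRowSource.None;

        var adapter = new OracleDataAdapter
        {
            UpdateCommand = update,
            UpdateBatchSize = 1000   // 0 = largest batch the provider supports, 1 = no batching
        };
        adapter.Update(changes);
    }
}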
-
Hello Alex,
The same behavior is replicated for delete operations in batch mode.
-
I filed bug 32086003 to further investigate the issue.
To make sure I understand the performance difference: the 67K inserts take 3.1 seconds to execute, while the 67K updates (run in 10K batches) take about 26 minutes. Is that correct?
-
Hello Alex,
Yes, you are right.
Take a look at this table.
Furthermore, when we send data to be deleted in the same way, we get the same result (it takes a lot of time to finish).
The times are consistent with the batch size.
-
Hello Alex,
Yesterday I was working with two DBAs reviewing this problem and they found an error in my tests. Before executing the insert test I always truncated all the tables, but I never rebuilt the statistics, and they saw that the execution plan was wrong.
We rebuilt the statistics and executed all the steps again, and performance has improved for both cases, update and delete in batch mode.
For instance:
updated 67K records: 00:00:01.4431899
deleted 67K records: 00:00:02.9927209
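For anyone hitting the same issue, statistics can also be regathered from client code after a truncate-and-reload. A small sketch, with the schema and table names as placeholders (a DBA may prefer different DBMS_STATS options):

using Oracle.ManagedDataAccess.Client;

class GatherStatsSketch
{
    // Regathers optimizer statistics for one table so the next plan is based on current data.
    static void GatherStats(OracleConnection con)
    {
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText =
                "BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => :owner, tabname => :tab); END;";
            cmd.Parameters.Add(":owner", "MY_SCHEMA");
            cmd.Parameters.Add(":tab", "TABLE2");
            cmd.ExecuteNonQuery();
        }
    }
}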
The point to keep in mind, as you commented previously: for updates the batch size cannot be greater than 65K; with larger values the process hangs and there is no response. Should this limit also be applied to the delete process? I have not tested that yet.
This new test was executed on Oracle 19c (19.4) with Managed Driver 19.3.1.
Many thanks for your support, and apologies for any inconvenience caused by my mistake.
-
Hello Alex,
The update batch size limit does not affect the delete process. I have tested with a batch size of 250K and the process finished successfully.
-
Thanks for letting us know, David! I'm glad you were able to resolve the problem successfully.