
Java Database Connectivity (JDBC)


Records not getting inserted properly into the table in Oracle

843859, Nov 12 2008 (edited Nov 20 2014)
We are inserting approximately 80,000 records in a single batch insert, but only about 5,000 records are inserted successfully. If the record count is less than 70,000, there are no issues.

No exceptions are thrown while the process executes.

We are using WebLogic 8.1.3. Is there any limit on the number of records that can be inserted in a single batch insert?

Any help is highly appreciated.
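
For context, the kind of single-batch, single-commit insert described above typically looks something like the sketch below. The connection details and table/column names are illustrative assumptions, not taken from the post.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SingleBatchInsert {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and table/column names.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO target_table (id, payload) VALUES (?, ?)");
        try {
            for (int i = 1; i <= 80000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();          // all 80,000 rows go into one batch
            }
            ps.executeBatch();          // one round trip for the whole batch
            con.commit();               // single commit at the very end
        } finally {
            ps.close();
            con.close();
        }
    }
}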

Comments

796254
a.santhosh wrote:
We are inserting approximately 80,000 records in a single batch insert, but only about 5,000 records are inserted successfully. If the record count is less than 70,000, there are no issues.

No exceptions are thrown while the process executes.

We are using WebLogic 8.1.3. Is there any limit on the number of records that can be inserted in a single batch insert?

Any help is highly appreciated.
are you trying to commit all of them as a single unit of work? could be a transaction buffer that's exceeded.

smaller commits might help.

%
796254
a.santhosh wrote:
We are inserting approximately 80,000 records in a single batch insert, but only about 5,000 records are inserted successfully. If the record count is less than 70,000, there are no issues.

No exceptions are thrown while the process executes.
i doubt this. sure you just aren't catching and handling them?

We are using WebLogic 8.1.3. Is there any limit on the number of records that can be inserted in a single batch insert?
why is weblogic involved at all?
Any help is highly appreciated.
why do you use java to do this? why not an ETL tool or the import utilities your database already has?


%
843859
Maybe a DB or JDBC driver limitation. Who knows?

Split the work into multiple batches of 70,000 or fewer instead.

But have you considered using the DB's own import and export tools? Inserting that many records using Java is a bad smell.
843859
Yes, I am catching all the exceptions, but none are thrown back.

Also, I am committing only once, after the whole batch has been executed.
843859
Yes, I can make the change to split the records into two batches, but without knowing the exact reason for the issue, how can we justify the change?

The insert happens after some manipulation of the records in Java, so the option of ETL tools doesn't come into the picture.
796254
a.santhosh wrote:
Yes, I am catching all the exceptions, but none are thrown back.
so what does the database log tell you?
Also, I am committing only once, after the whole batch has been executed.
the error tells you this is a bad idea. smaller commits, please.

%
843859
a.santhosh wrote:
Yes, I can make the change to split the records into two batches, but without knowing the exact reason for the issue, how can we justify the change?
As said, maybe a DB or JDBC driver limitation (or bug).
The insert happens after some manipulation of the records in Java, so the option of ETL tools doesn't come into the picture.
Then consider writing the manipulated records to a SQL file and letting the DB import it.
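
As a rough illustration of that suggestion, the manipulated rows could be written from Java to a flat file that the database's own loader (for example SQL*Loader) or a plain SQL script then imports. The class, file name, and layout below are hypothetical:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class CsvExport {
    // Writes the manipulated rows (already formatted as comma-separated lines)
    // to a flat file for the database's own import tooling to load.
    public static void writeCsv(List<String> csvLines, String fileName) throws IOException {
        PrintWriter out = new PrintWriter(new FileWriter(fileName));
        try {
            for (String line : csvLines) {
                out.println(line);   // one line per row, matching the loader's expected layout
            }
        } finally {
            out.close();
        }
    }
}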
843859
If we are not at all bothered about performance (i.e., no batch update), then we can insert the 80k rows record by record with a single commit.
Reference: http://www.javaperformancetuning.com/tips/jdbc.shtml

1) For the insert to succeed, the database needs enough tablespace; the more tablespace available, the better it runs.
2) During an update or insert operation, each record is buffered by the DB engine so that it can be rolled back whenever we call con.rollback(). So the database has to hold all 80k statements and be ready for either commit or rollback; if it is unable to maintain this data, it will throw an exception.
3) It can also depend on the connection pooling you are using.
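
A minimal sketch of that record-by-record, single-commit approach, assuming illustrative table and column names and an already-open connection: every row is its own executeUpdate(), the database buffers the undo data for all of them, and a failure rolls the whole unit back.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowByRowInsert {
    // Hypothetical table/column names; 'con' is an open connection to the target schema.
    public static void insertAll(Connection con, int rowCount) throws SQLException {
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO target_table (id, payload) VALUES (?, ?)");
        try {
            for (int i = 1; i <= rowCount; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.executeUpdate();   // one round trip per row, no batching
            }
            con.commit();             // all rows become visible at once
        } catch (SQLException e) {
            con.rollback();           // the undo data the DB buffered per row is used here
            throw e;
        } finally {
            ps.close();
        }
    }
}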
843859
a.santhosh wrote:
Yes, I can make the change to split the records into two batches, but without knowing the exact reason for the issue, how can we justify the change?
Add 1 to a record counter and, after a certain count is reached, commit. Rather than just two batches, I think you might choose to commit after every 1,000 rows or so, and write out a log file that shows the last row inserted and the count at each commit, so that if it still fails you know exactly where you left off without running a separate SQL query (see the sketch after this post).
The insert happens after some manipulation of the records in Java, so the option of ETL tools doesn't come into the picture.
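
A minimal sketch of that suggestion, assuming illustrative table and column names and an already-open connection: rows are batched, and every commitEvery rows the batch is executed, committed, and the progress logged so a later failure shows exactly where to restart.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {
    // Hypothetical table/column names; commitEvery is the interval suggested above (e.g. 1000).
    public static void insertAll(Connection con, int rowCount, int commitEvery) throws SQLException {
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO target_table (id, payload) VALUES (?, ?)");
        try {
            int lastCommitted = 0;
            for (int i = 1; i <= rowCount; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % commitEvery == 0) {
                    ps.executeBatch();
                    con.commit();
                    lastCommitted = i;
                    // crude progress log: the restart point if a later chunk fails
                    System.out.println("committed through row " + lastCommitted);
                }
            }
            ps.executeBatch();        // flush the final partial batch
            con.commit();
            System.out.println("committed through row " + rowCount);
        } finally {
            ps.close();
        }
    }
}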
a.santhosh wrote:
Yes, I can make the change to split the records into two batches, but without knowing the exact reason for the issue, how can we justify the change?
Did you test the return value (array) from executeBatch() to determine that nothing failed?
The insert happens after some manipulation of the records in Java, so the option of ETL tools doesn't come into the picture.
That statement contains an incorrect idea.

The fact that Java manipulates the data does not preclude you from using those tools. You need do nothing more than create an appropriate file after manipulating the data.
843859
The return value of the batch is -2 (SUCCESS_NO_INFO), from which we are not able to determine how many records were processed.
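
For reference, inspecting the result of executeBatch() looks roughly like the sketch below. SUCCESS_NO_INFO (-2) only means the driver executed the statement without reporting a row count; an element that actually fails is reported through a BatchUpdateException (and, for drivers that keep processing, as Statement.EXECUTE_FAILED in its update counts). The class and method names are illustrative:

import java.sql.BatchUpdateException;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchResultCheck {
    // 'ps' already has its batch filled; this runs it and inspects the result.
    public static void executeAndCheck(PreparedStatement ps) throws SQLException {
        try {
            int[] counts = ps.executeBatch();
            int noInfo = 0;
            for (int count : counts) {
                if (count == Statement.SUCCESS_NO_INFO) {
                    noInfo++;   // -2: executed, but the driver gave no row count
                }
            }
            System.out.println(counts.length + " statements executed, "
                    + noInfo + " without a row count");
        } catch (BatchUpdateException bue) {
            // Thrown when an element of the batch fails. The counts cover the
            // elements processed so far; drivers that keep going mark failed
            // elements with Statement.EXECUTE_FAILED (-3).
            int[] done = bue.getUpdateCounts();
            System.err.println("batch failed, " + done.length
                    + " update counts reported: " + bue.getMessage());
        }
    }
}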
843859
a.santhosh wrote:
The return value of the batch is -2 (SUCCESS_NO_INFO), from which we are not able to determine how many records were processed.
Several posters have recommended more frequent commits, as well as other things that should help you. I don't understand why you appear reluctant to try some of these.
843859
I am open to making the change, but this piece of code has been running in production for almost 15 months, so I wanted to find the exact and concrete answer before making the change.

Also, the flow of requests has started increasing in recent months.

Thanks
843859
a.santhosh wrote:
I am open to making the change, but this piece of code has been running in production for almost 15 months, so I wanted to find the exact and concrete answer before making the change.

Also, the flow of requests has started increasing in recent months.

Thanks
We're going back and forth about this and really aren't getting very far, are we? You are in the best position to test this out, and testing will likely get you the answer. Simply thinking, and asking others to think, without doing any testing or debugging is a poor substitute.
DrClap
a.santhosh wrote:
I am open to making the change, but this piece of code has been running in production for almost 15 months, so I wanted to find the exact and concrete answer before making the change.
But you do have a test server where you can try out potential changes, don't you?
843859
Yes, I do have a test server, but I am trying hard to simulate the problem; in the test environment it is very hard to process thousands of records at a time. I am thinking of taking the production requests and replaying them in the test environment.
a.santhosh wrote:
Yes, I do have a test server, but I am trying hard to simulate the problem; in the test environment it is very hard to process thousands of records at a time. I am thinking of taking the production requests and replaying them in the test environment.
Huh?

Write a program that generates thousands of records.
843859
I made the code change to commit every 10,000 records. It worked perfectly fine.

Suppose I have 100,000 records to insert; I have 10 commits in this case. The issue is that if an exception happens somewhere after 50,000 records, I am not able to roll back the already committed records.
843859
a.santhosh wrote:
I made the code change to commit every 10,000 records. It worked perfectly fine.

Suppose I have 100,000 records to insert; I have 10 commits in this case. The issue is that if an exception happens somewhere after 50,000 records, I am not able to roll back the already committed records.
This is true, and could be looked at as a downside. That's why it was suggested that you log your commits, so that you know EXACTLY where you left off and what the last committed row was.

Then there is this: committing more frequently might (as is the intention) prevent the failure in the first place, which is certainly most preferable.
a.santhosh wrote:
I made the code change to commit every 10,000 records. It worked perfectly fine.

Suppose I have 100,000 records to insert; I have 10 commits in this case. The issue is that if an exception happens somewhere after 50,000 records, I am not able to roll back the already committed records.
So?

If you have 100,000 records that logically need to be in a single commit, then I suggest you look at a different design, and perhaps architecture, as well.

For example, put the records into a "temporary" table and process them in the database to move them to the final destination.
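
A minimal sketch of that staging-table idea, with hypothetical table names: the manipulated rows are first loaded into a staging table (that load can itself be committed in chunks), and the move to the final table is then a single, small set-based statement executed inside the database.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class StagingTableMove {
    // Hypothetical table names; 'con' is an open connection to the target schema.
    public static void moveToFinal(Connection con) throws SQLException {
        con.setAutoCommit(false);
        Statement stmt = con.createStatement();
        try {
            // move everything from the staging table to the final destination in one statement
            stmt.executeUpdate(
                "INSERT INTO target_table (id, payload) " +
                "SELECT id, payload FROM staging_table");
            stmt.executeUpdate("DELETE FROM staging_table");
            con.commit();   // the move itself is a single, small unit of work
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            stmt.close();
        }
    }
}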