I need some advice on how to optimize the Load Data step from the load code template (LCT_SQL_TO_ORACLE). I have some tables with 1 million rows, and this task alone takes about 30 minutes (the tables also contain lots of CLOB columns). Maybe the sqljdbc.jar driver is too slow, but I doubt I can use another driver like jTDS (which I've heard is faster). I have to load data from SQL Server 2008 R2.
What can I do? Thank you!
Have you tried tuning the SQL Server JDBC connection URL to the max first off? There is a post here that talks about a specific problem with larger volumes, which it doesn't sound like you have hit, but first I'd make sure you have tried the different options the Microsoft JDBC driver offers.
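To make the suggestion concrete, here is a sketch of the kind of tuning properties the Microsoft JDBC driver supports on the connection URL. The property names come from the Microsoft SQL Server JDBC driver documentation; the host, port, and database names are placeholders, and the specific values are illustrative, not recommendations:

```java
// Sketch: composing a tuned Microsoft SQL Server JDBC URL.
// Host/database names are placeholders; values are illustrative.
public class TunedUrl {
    static String buildUrl(String host, int port, String db) {
        return "jdbc:sqlserver://" + host + ":" + port
            + ";databaseName=" + db
            + ";responseBuffering=adaptive"          // stream large results (helps with CLOBs) instead of buffering fully
            + ";sendStringParametersAsUnicode=false" // avoid implicit nvarchar conversions on varchar columns
            + ";packetSize=32767";                   // larger TDS packets for bulk reads
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("sqlhost", 1433, "stagingdb"));
    }
}
```

In OWB this string would go into the location's URL Template rather than application code, but the effect on the driver is the same.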
The other option is the bulk unload code templates that are in OWB 11.2.0.3; these use the Microsoft/DB2 bulk unloading features and then use external tables to stage the data in Oracle. These are generally much faster than any JDBC route.
Thanks for your fast answer. I have already increased the heap size and also modified the URL Template, but it's still slow. I will try the second option. I was also wondering if I should use the APPEND and PARALLEL options from the table operator's hints in the mapping configuration. I've just started looking into them, so I don't know much about how they work, but it would help to hear your opinion. Thank you again!
Edited by: 948494 on 10.09.2012 09:18
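For readers wondering what those hints do: on an Oracle target they configure a direct-path, parallel insert. A sketch of roughly the shape of statement they produce (the table names are hypothetical, and the exact SQL OWB generates per template may differ):

```java
// Illustrative only: roughly the kind of direct-path insert the APPEND and
// PARALLEL hints configure on a target table operator. Table names are
// hypothetical; real OWB-generated SQL may differ in detail.
public class HintedInsert {
    static String hintedInsert(String target, String source, int degree) {
        return "INSERT /*+ APPEND PARALLEL(" + target + ", " + degree + ") */ "
             + "INTO " + target + " SELECT * FROM " + source;
    }

    public static void main(String[] args) {
        System.out.println(hintedInsert("STG_ORDERS", "EXT_ORDERS", 4));
    }
}
```

APPEND writes above the high-water mark and bypasses most redo/undo overhead, which mainly helps the Oracle-side insert; it won't speed up a bottleneck that is in the JDBC extract from SQL Server.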
Are you using 11.2.0.3 of OWB?
If so, there were also some additional parameters added to the URL which can impact performance. Two properties were added: Array Fetch Size and Batch Update Size. If you read through the following observations on the fetch size, for example, you can see it can change performance dramatically.
So setting these properties in the OWB location and then re-registering/deploying your locations and connectors will help here.
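A back-of-envelope calculation shows why the array fetch size matters so much: it controls how many rows each network round trip pulls, so for a given row count the number of round trips is roughly the ceiling of rows divided by fetch size. This sketch assumes a small default fetch size purely for illustration:

```java
// Back-of-envelope: network round trips needed to fetch N rows when the
// driver pulls fetchSize rows per trip (ceiling division).
public class FetchSizeMath {
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(1_000_000, 10));     // → 100000
        System.out.println(roundTrips(1_000_000, 50_000)); // → 20
        System.out.println(roundTrips(22, 50_000));        // → 1
    }
}
```

Note the last case: a 22-row table still takes a single round trip regardless of how large the fetch size is, so a big setting does not penalize small tables on round trips; the cost is mainly client-side buffer memory per open cursor.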
Yes, I am using OWB 11.2.0.3.
Thank you very much for the advice. I am planning on modifying the URL Template line as you say, but there is still something I don't understand. I have tables with 1 million rows and tables with 22 rows. Would setting the array fetch size parameter to, say, 50,000, as in the example, have a negative impact on tables with a small amount of data (under 10,000 rows)?
I apologize for the large number of questions.
Edited by: 948494 on 10.09.2012 10:01
Thank you very much for these tips to improve performance.
To modify the Java heap size I edited the owbclient file and set a value of 1024 for these two parameters: -XX:MaxPermSize=1024m and -Xmx1024m.
Am I supposed to modify the ccastart file or the setowbenv file? I'm not very clear on that.
I've read the Oracle doc on the Java out-of-memory issue and it only recommends modifying the owb.conf file. I've also read your answer to an older post where you mentioned changing the -Xmx128M value in the ccastart file. Which one has an impact on performance when executing a CT mapping?
For code template mappings the heap that matters is the one in the CCA, which is started from the ccastart scripts, so it's those that need to be increased. It is from the CCA that connections to sources/targets are made and data is pushed through, for those code templates/KMs that do that.
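A sketch of the kind of edit being described in ccastart; this is not the literal file contents (the variable names and surrounding script vary by OWB release), and the heap values are illustrative:

```shell
# Illustrative edit to the JVM options in the ccastart script.
# Before: the CCA JVM starts with a small maximum heap, e.g.
#   ... -Xms64M -Xmx128M ...
# After: raise the maximum heap so large CT loads have headroom, e.g.
#   ... -Xms256M -Xmx1024M ...
# Restart the control center agent afterwards so the new heap takes effect.
```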
There is a post here that covers some of the CCA architecture.