I am using Oracle 11g R2 and I am preparing performance tests for our application. This application will load a lot of data (around 10 million rows per day) into a couple of Oracle tables with CLOB and VARCHAR2 columns.
I would like to know if you have any advice on setting up these tables to improve insert performance (use of ASM? changing table parameters like INITRANS, etc.).
Many thanks!
I'm not sure if I can be any more specific than this, but ok, I'll try.
It's a bad habit to start any database activity by tweaking the parameters. Of course, there are exceptions to this rule, but generally you either don't need to touch parameters at all, or you modify them based on evidence from performance tests. So here is what I would do:
- take requirements
- take a test environment that has enough similarity to production
- build a simple test -- X sessions doing Y inserts per second concurrently, where X and Y are derived from your requirements
- see if insert time is within the target goal
- if not, look at the response-time profile -- where does the process spend most of its time, and how can that time be reduced? In particular, if you see contention-related waits, then you may need to tweak INITRANS or some other parameters
If you do it this way, you'll be able to understand and trust your results. If you just collect a bunch of opinions from guys you never met and who never saw your system, who knows what you'd get.
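For the "look at the response-time profile" step above, one way to see where a test session spends its time is to query its wait-event statistics. A minimal sketch (the SID 123 is a placeholder for your test session's SID):

```sql
-- Summarise wait events for one test session, slowest first.
-- Contention-related events such as "enq: TX - allocate ITL entry"
-- would point at parameters like INITRANS.
SELECT event, total_waits, time_waited_micro
FROM   v$session_event
WHERE  sid = 123
ORDER  BY time_waited_micro DESC;
```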
I have the following points in my notes for "fast" inserts:
1. Mark indexes unusable
2. Disable constraints
3. Stop using row-by-row PL/SQL loops
4. Load the data as a single INSERT /*+ APPEND */ ... SELECT, with parallelism on both the insert and the select
5. Identify if you can partition the table
6. Rebuild indexes with parallel and nologging
7. Set Nologging on target table
8. Direct path insert
9. Enable Parallel DML
10. A good example by Mark @ https://forums.oracle.com/thread/1078374?start=15&tstart=0
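Points 4, 8 and 9 above can be combined into a single statement. A sketch, where the table names and the degree of parallelism are placeholders:

```sql
-- Enable parallel DML for this session (point 9).
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path (APPEND) parallel insert from a staging table
-- (points 4 and 8). Table names are hypothetical.
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_table t
SELECT /*+ PARALLEL(s, 4) */ *
FROM   staging_table s;

-- Direct-path loaded rows are not visible until you commit.
COMMIT;
```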
Mostly good points (I'd add one about disabling triggers), just a couple of comments:
1) you don't need to rebuild indexes to make them PARALLEL, ALTER INDEX ... PARALLEL doesn't involve any actual rebuild
2) careful with NOLOGGING -- be sure you can afford to lose the data
3) making indexes UNUSABLE doesn't stop Oracle from updating them -- you'd need to drop and rebuild them, which is not an option for continuous inserts (as I think is the case here)
4) Partitioning is not very helpful for INSERT performance (if helpful at all) -- in some rare cases hash partitioning can help reduce contention, but generally it won't help INSERTs at all, while possibly hurting the performance of other things (e.g. queries involving local indexes in the absence of partition pruning).
I would say that with the exception of pure DW systems, partitioning should only be done for reasons other than performance (i.e. data manageability).
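To illustrate comment 1 above: changing an index's PARALLEL attribute is a dictionary-only change and performs no rebuild; a rebuild is a separate, explicit operation. The index name is a placeholder:

```sql
-- Dictionary-only change: no data movement, no rebuild.
ALTER INDEX target_table_ix PARALLEL 4;

-- An actual rebuild must be requested explicitly -- and note that
-- NOLOGGING here means the rebuild is not fully recoverable from
-- the redo stream (comment 2 above).
ALTER INDEX target_table_ix REBUILD PARALLEL 4 NOLOGGING;
```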
Actually, I got the above points from this forum's expert replies in different threads. A couple of the points are from Hemant's reply @ https://forums.oracle.com/thread/1078374?start=15&tstart=0
But I am confused: is there any chance of data loss if my table has NOLOGGING enabled, and will Oracle still update an index if that index is unusable, please?
>Actually this is an OLTP application that insert the data so we cannot disable any indexes or change the SQL code.
I do understand. If you see that the inserts are very slow, you can take a level 8 10046 SQL trace and then analyse it with tkprof to see where the time is being spent.
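A minimal sketch of taking that trace for the current session (level 8 includes wait events):

```sql
-- Start a 10046 trace at level 8 (SQL + waits).
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... run the slow inserts here ...

-- Stop tracing.
ALTER SESSION SET EVENTS '10046 trace name context off';
```

The trace file lands in the session's trace directory; a typical tkprof invocation (file names are placeholders) would be `tkprof mydb_ora_1234.trc report.txt sys=no sort=exeela`.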
>Actually, I got the above points from this forum's expert replies in different threads.
There are some people who claim to be Oracle experts, but they really aren't. And some of them post on forums.
And there are quite a few of them who are very successful in convincing people around them that they are experts -- but they still aren't.
Some of them even publish books and give TV interviews -- that doesn't make what they say any less false.
Don't trust names and titles blindly.