Yes, there is a commit transaction step after the insert into the target table. But my problem is that not all records were inserted into the target table: the target DB went down in the middle of processing (say on the 500th record). So in which table will the data from the 500th record onwards be stored? And once the target DB comes back up, how will the data (from the 501st record) be fetched so that no data loss takes place?
Unless you are using a modified KM, ODI only commits transaction after it finishes the step.
So, if you have 1000 rows at the source, this is what happens if the target DB crashes in the middle of the process:
1) ODI loads 1000 records from source to a C$_ table (assuming it's an external source).
2) ODI loads the 1000 records into the I$_ table.
3) When loading the target, the DB goes down. The transaction is not committed. All 1000 rows are still in the I$ table, and ODI stops with an error.
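The all-or-nothing behavior of step 3 can be sketched in plain Python with sqlite3 (table and column names here are illustrative, not the names ODI actually generates): the staging rows are committed, but the target load runs in a single transaction, so a crash mid-way rolls everything back.

```python
import sqlite3

# Demo: staging (I$-like) table is loaded and committed; the target
# load is one transaction that either commits whole or rolls back whole.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE i_staging (id INTEGER, val TEXT)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO i_staging VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 1001)])
conn.commit()  # steps 1 and 2: staging is loaded and committed

# Step 3: load the target; simulate the DB going down on row 500.
try:
    rows = conn.execute("SELECT id, val FROM i_staging").fetchall()
    for n, (rid, val) in enumerate(rows, 1):
        if n == 500:
            raise RuntimeError("target DB went down")  # simulated crash
        conn.execute("INSERT INTO target VALUES (?, ?)", (rid, val))
    conn.commit()
except RuntimeError:
    conn.rollback()  # no rows reach the target; staging is untouched

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])     # 0
print(conn.execute("SELECT COUNT(*) FROM i_staging").fetchone()[0])  # 1000
```

After the rollback the target holds zero rows and all 1000 rows are still sitting in the staging table, which is why a restart can simply redo step 3.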
You have 2 options here:
1) Run the process again. The process will start over, and do steps 1, 2 and 3 again.
2) Restart the process from the Operator. The process will try to continue from step 3. (This is not always what you want, so be sure of your loading strategy.)
Keep in mind that you're not using a row-by-row transaction tool, so there is no "continue from the 501st row".
Agree with LuizFilipe
This process can be automated, but for that you would have to customize your KM accordingly, which would be very complex.
So it is better to restart your interface execution from the beginning.
Hi Luiz Filipe,
Thanks a lot for your reply.
But I have a query. Suppose I have 100,000 rows from the source to write to the target DB; as you said, they will first be loaded into the C$ and then the I$ table, so performance will be affected, right?
So I want to commit every 200 rows, so that even if the target DB goes down on the 3,010th row, 3,000 rows are already inserted in the target table and only the remaining uninserted data needs to be processed.
So could you please explain how that would be processed, from which table, and whether I would have to restart it manually?
I hope my question is clear.
ODI's performance is pretty good, because in fact there is no "ODI performance". The performance will be as good as your environment's performance plus a good load strategy. I've worked on several ODI projects with gigantic volumes, with no performance issues.
There is no such thing as "remaining uninserted data". All data being inserted remains in the I$ table until the end of the process. ODI does not automatically track inserted/uninserted data, so be careful if you implement intermediate commits in your process. You will have to design the process to flag the data that has been committed, and add WHERE clauses to ignore those rows when the load is aborted and restarted.
You will also have to use row-by-row processing, which will dramatically decrease your performance.
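To make the flag-and-filter idea concrete, here is a hypothetical sqlite3 sketch (table names, the `committed` flag column, and the batch size are all my own illustration, not anything ODI generates): commit every N rows, mark the committed staging rows, and on restart process only the unflagged remainder.

```python
import sqlite3

def load_with_batches(conn, batch_size=200, fail_at=None):
    """Move unflagged staging rows to the target, committing every batch_size rows."""
    rows = conn.execute(
        "SELECT id, val FROM i_staging WHERE committed = 0 ORDER BY id").fetchall()
    done = 0
    for rid, val in rows:
        if fail_at is not None and done == fail_at:
            conn.rollback()  # lose only the uncommitted partial batch
            raise RuntimeError("target DB went down")
        conn.execute("INSERT INTO target VALUES (?, ?)", (rid, val))
        # Flag the row in the same transaction as the target insert,
        # so flag and data always commit (or roll back) together.
        conn.execute("UPDATE i_staging SET committed = 1 WHERE id = ?", (rid,))
        done += 1
        if done % batch_size == 0:
            conn.commit()
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE i_staging (id INTEGER, val TEXT, committed INTEGER DEFAULT 0)")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO i_staging (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 1001)])
conn.commit()

try:
    load_with_batches(conn, batch_size=200, fail_at=500)  # crash after 500 rows
except RuntimeError:
    pass
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 400: full batches survive

load_with_batches(conn, batch_size=200)  # restart picks up only unflagged rows
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 1000
```

Note how the simulated crash on the 500th row leaves only 400 rows committed (the last partial batch of 100 rolls back), which is exactly the restart bookkeeping the post above warns you would have to build yourself.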
Please let me know if that helps!