Hi All,
I have an application where 15 DBMS Scheduler jobs run to process 500K transactions per hour. Each job processes its own specific set of transactions, so locking is not an issue. My sequences are defined with a cache size of 2000 and an increment of 1.
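For reference, each sequence is created roughly like this (the sequence name here is just illustrative):

CREATE SEQUENCE type1_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 2000;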
Now I have a limit that only 14 million transactions can be processed in a day (5 million Type1, 2 million Type2, 3 million Type3, etc.).
I have 15 sessions running to process these transactions. I am bulk fetching transactions with a LIMIT of 1000, in a mix of types (say 200 Type1 transactions, 500 Type2, and 300 Type3). Each transaction type uses its own sequence, e.g. Type1 transactions use the TYPE1 sequence, Type2 transactions use the TYPE2 sequence, and so on. Each job's loop looks roughly like the sketch below (table and column names are illustrative, not my actual schema):
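DECLARE
  CURSOR txn_cur IS
    SELECT txn_id, txn_type FROM pending_txns;
  TYPE txn_tab IS TABLE OF txn_cur%ROWTYPE;
  l_txns txn_tab;
BEGIN
  OPEN txn_cur;
  LOOP
    FETCH txn_cur BULK COLLECT INTO l_txns LIMIT 1000;
    EXIT WHEN l_txns.COUNT = 0;
    FOR i IN 1 .. l_txns.COUNT LOOP
      -- each type draws its id from its own sequence
      IF l_txns(i).txn_type = 'TYPE1' THEN
        INSERT INTO processed_txns (id, txn_id)
        VALUES (type1_seq.NEXTVAL, l_txns(i).txn_id);
      ELSIF l_txns(i).txn_type = 'TYPE2' THEN
        INSERT INTO processed_txns (id, txn_id)
        VALUES (type2_seq.NEXTVAL, l_txns(i).txn_id);
      END IF;
      -- ... and so on for the other types
    END LOOP;
    COMMIT;
  END LOOP;
  CLOSE txn_cur;
END;
/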
But what I see is that each time I run the query to check a sequence, its value has jumped by 2000. So I have gaps in my sequences, and I am burning through my 14 million limit without even processing 800K transactions.
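Roughly, the check I am running is the query below. (As I understand it, LAST_NUMBER in USER_SEQUENCES is the last number written to disk, which for a cached sequence advances by the cache size each time a new batch of values is cached.)

SELECT sequence_name, last_number, cache_size
FROM   user_sequences
WHERE  sequence_name LIKE 'TYPE%';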
Is it because:
1) Multithreading: 15 different sessions are processing transactions (against the different type sequences), and each session keeps its own cache of 2000?
2) The high cache size?
How can I avoid this issue?
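For example, would altering the sequences to a much smaller cache, as below, fix the gaps, and what would that do to performance at 500K transactions per hour? (Sequence name illustrative, as above.)

ALTER SEQUENCE type1_seq CACHE 20;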
Regards
Vinod