How should a small transient table be stored to get the best performance under heavy insert/delete load?
Hi Gurus,
We have a small table of roughly 10k to 20k records with an average row length of 200 characters. On average this table is 50 MB, and the index on it is 4 MB.
This table shows a lot of contention on its index in our 3-node RAC environment. The table sees roughly 150k inserts and the same number of deletes. Under a normal load of around 50k inserts plus 50k deletes, the average elapsed time is under 20 ms, but it climbs to about 1500 ms when the system is loaded only a little more and the number of queries triples.
I want to know the best way to get decent performance from this table. It stores transient data that typically lives in the table for only 2 to 5 seconds. Essentially, it holds lock information for a few particular application operations, so that in a distributed environment different application servers can pick different jobs and verify that a job is not already being executed by another application server.
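To make the workload concrete, here is a minimal sketch of the claim-and-release pattern described above. The table and column names (`job_lock`, `job_id`, `owner`) are hypothetical, and SQLite stands in for Oracle purely for illustration; the key point is that a unique index on the job id lets the insert itself act as the lock acquisition:

```python
import sqlite3

# Hypothetical sketch of the job-lock pattern: each app server tries to
# INSERT a row for a job; the primary-key index guarantees only one wins.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE job_lock (
        job_id   INTEGER PRIMARY KEY,   -- unique index: one owner per job
        owner    TEXT NOT NULL,
        taken_at REAL NOT NULL
    )
""")

def try_claim(conn, job_id, owner):
    """Attempt to claim a job; return True if this server got the lock."""
    try:
        conn.execute(
            "INSERT INTO job_lock (job_id, owner, taken_at) "
            "VALUES (?, ?, julianday('now'))",
            (job_id, owner),
        )
        return True
    except sqlite3.IntegrityError:
        return False  # another app server already holds this job

def release(conn, job_id, owner):
    """Delete the lock row once the job finishes (rows live only seconds)."""
    conn.execute(
        "DELETE FROM job_lock WHERE job_id = ? AND owner = ?",
        (job_id, owner),
    )

print(try_claim(conn, 42, "app1"))  # True  -> app1 claims job 42
print(try_claim(conn, 42, "app2"))  # False -> app2 is blocked out
release(conn, 42, "app1")
print(try_claim(conn, 42, "app2"))  # True  -> lock row gone, app2 can claim
```

Since every claim and release touches the same small index, this is exactly the access pattern that causes hot-block contention across RAC nodes.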