11g OLTP Compression
Hi
We are evaluating 11g R2 Advanced OLTP Compression for one of our 24x7 OLTP databases.
The table in question is quite large: approximately 4.6 billion rows and about 140 GB in size. The primary key is based on a sequence.
The table undergoes inserts, updates, and selects.
Would this be a good candidate for OLTP compression? Since the table undergoes updates and inserts as well as selects, do we need to worry about overhead on DML operations? Are there any published test results where this feature has been benchmarked under DML workloads? How much overhead is involved for such operations, and can we expect a slowdown in performance because of it?
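For context, before committing to compression on 11.2 the expected ratio can be estimated with the `DBMS_COMPRESSION` compression advisor. A sketch of such a call is below; the schema name `MYSCHEMA`, table name `BIG_TABLE`, and scratch tablespace `SCRATCH_TS` are placeholders, not from the question:

```sql
-- Estimate the OLTP compression ratio for a table (11.2 compression advisor).
-- MYSCHEMA, BIG_TABLE, and SCRATCH_TS are placeholder names.
SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'SCRATCH_TS',   -- scratch tablespace for the sample copy
    ownname        => 'MYSCHEMA',
    tabname        => 'BIG_TABLE',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/
```

Note that `ALTER TABLE ... COMPRESS FOR OLTP` only affects newly written blocks; compressing the existing 140 GB would require a rebuild (e.g. `ALTER TABLE ... MOVE COMPRESS FOR OLTP` or an online redefinition), which also invalidates indexes unless they are rebuilt.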