We are planning to implement compression for our application tables on Exadata.
Which tables should we compress? Those larger than 2 GB, those with more than 10 million rows, or some other criterion? Since the app does not use bulk loading and the tables see updates, it would be cumbersome to manage, so it was decided to use OLTP compression. Are there any drawbacks to using OLTP compression with DML activity? Also, should we shrink the objects (tables and indexes) before we implement compression?
How about something like this:
- Partition the large tables based on date loaded
- If the majority of your processing involves recently-loaded data, leave the recent data with OLTP compression (or none at all)
- Once the data becomes older, recompress it with ALTER TABLE MOVE PARTITION into an appropriate HCC compression level
If you have B-tree indexes with repeated values, consider key compression there as well.
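The steps above could be sketched like this (all object names are hypothetical, and QUERY HIGH is just one of the HCC levels; pick the level from your own compression-ratio tests):

```sql
-- Hypothetical objects: SALES partitioned by load date, with a local
-- index SALES_IDX and a separate B-tree index SALES_CODE_IDX whose
-- leading column has many repeated values.

-- Recompress a cold partition into HCC:
ALTER TABLE sales MOVE PARTITION sales_2018 COMPRESS FOR QUERY HIGH;

-- The move invalidates that partition's local indexes, so rebuild them:
ALTER INDEX sales_idx REBUILD PARTITION sales_2018;

-- Key (prefix) compression on a B-tree index with repeated leading values
-- (the number is the count of leading columns to compress):
ALTER INDEX sales_code_idx REBUILD COMPRESS 1;
```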
Thanks for your quick reply, Marc.
1. Our app does not access data based on date loaded.
2. I was thinking of implementing OLTP compression for all tables > 2 GB in size.
3. Some of the indexes are bloated, so my question is whether I should use
alter index "TEST"."PR_QUEUE_FK01" modify partition "SYS_P1168" shrink space
4. for tables use
alter table "JH"."QUAL_HIST" move partition "SYS_P9750" compress for oltp
5. So: do step 3 first, and after shrinking, implement OLTP compression as in step 4.
Is that a viable option?
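For item 2, a dictionary query along these lines could list the candidate tables over 2 GB (a sketch; it assumes you have access to DBA_SEGMENTS and sums all segments of partitioned tables together):

```sql
-- List tables whose total segment size exceeds 2 GB, largest first.
SELECT owner,
       segment_name,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS gb
FROM   dba_segments
WHERE  segment_type LIKE 'TABLE%'
GROUP  BY owner, segment_name
HAVING SUM(bytes) > 2 * 1024 * 1024 * 1024
ORDER  BY gb DESC;
```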
Even if your app doesn't access data based on date loaded, that doesn't preclude you from partitioning on it to allow better compression.
Regarding the indexes: once you move a table partition you'll need to rebuild its indexes anyway, so shrinking them first is a moot point.
Hope this helps!
Please let us know on which environment you are working.
Is this Exadata?
If yes, then which version: V1, V2, X2 or X8, and full/half/quarter rack?
What is the current capacity, free space and used space?
Why apply EHCC to a table of only 2 GB?
The environment is an X2-2 Exadata half rack. DB size is 4 TB; free space is 400 GB. The discussion is about applying OLTP compression, since the app is not bulk loading into the tables and there are frequent updates to them.
The DML pattern you describe (no direct loads and frequent updates) is the one OLTP compression was invented for. Nevertheless you may want to test that with your workload before taking it into production, for example with Database Replay.
To your last question:
Also should we shrink the objects(tables and indexes) before we implement compression?
If you only say ALTER TABLE t COMPRESS FOR OLTP, this will not compress the existing data. You have to say ALTER TABLE t MOVE COMPRESS FOR OLTP instead, which does not require a shrink beforehand.
"Don't believe it, test it!"
Every extra step has drawbacks.
Even OLTP compression can decrease update performance.
So if you don't need it, don't use it.
HCC compression = historical data, loaded with direct-path inserts and without ANY updates (if you update, you will end up with a mix of HCC and OLTP compression, and that will impact offloaded queries).
OLTP compression = live data with few or no updates.
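To spot the mixed-compression situation described above, DBMS_COMPRESSION can report how individual rows are actually stored. A sketch, reusing the JH.QUAL_HIST names from the earlier post and sampling 1000 rows (the sample size and names are illustrative):

```sql
-- GET_COMPRESSION_TYPE returns a constant from the DBMS_COMPRESSION
-- package spec (e.g. COMP_NOCOMPRESS, COMP_FOR_OLTP, COMP_FOR_QUERY_HIGH);
-- several distinct values here would indicate mixed compression.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE('JH', 'QUAL_HIST', t.ROWID) AS comp_type,
       COUNT(*) AS rows_sampled
FROM   jh.qual_hist t
WHERE  ROWNUM <= 1000
GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE('JH', 'QUAL_HIST', t.ROWID);
```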