Let's not go by the stats and comparison diagrams presented on slides by the marketing team of any product/feature. I'd like to ask anyone who has benefited from an HCC deployment to share their real experiences and the major technical hurdles they faced before/after using the HCC feature on Exadata, especially for OLTP systems. Please share your success stories; they would help others make a decision about it.
Thanks for starting the conversation -- I would extend the invitation to anybody willing to share their NON-success stories about Hybrid Columnar Compression as well. More specifically, situations where people found HCC didn't help them. I find this equally important, and it's something the whitepapers and sales engineers never tell you.
Our use case was a table storing quite a few LOB columns (i.e. PDFs or XMLs), and a lot of other applications are going to have this requirement/limitation. We found that HCC didn't save us much in our testing. We will still be pushing forward with an Exadata implementation (within a SuperCluster), though, since it helps us in other areas.
Apologies if this is bad etiquette (i.e. hijacking the thread) -- carry on with the success stories otherwise.
HCC offers two choices:
1. Warehouse Compression
This option is optimized for query performance.
Suitable for data warehouse applications.
Two options: Query High and Query Low
2. Online Archival Compression
This option is optimized for maximum compression ratios.
Suitable for data that changes very rarely.
Two options: Archive High and Archive Low
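For reference, the four levels above map to DDL like the following. This is just a sketch -- the table and column names are made up for illustration, and HCC itself only works on Exadata or other supported Oracle storage:

```sql
-- Warehouse compression: optimized for query performance
CREATE TABLE sales_hist (
  sale_id NUMBER,
  sale_dt DATE,
  amount  NUMBER(12,2)
) COMPRESS FOR QUERY HIGH;    -- or COMPRESS FOR QUERY LOW

-- Online archival compression: optimized for maximum compression ratio
CREATE TABLE sales_archive (
  sale_id NUMBER,
  sale_dt DATE,
  amount  NUMBER(12,2)
) COMPRESS FOR ARCHIVE HIGH;  -- or COMPRESS FOR ARCHIVE LOW
```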
In our environment we used HCC compression on a 24 GB table, and it compressed down to 1.8 GB. But in an OLTP environment we are facing performance issues because of HCC. So HCC is suitable for warehouse environments, and old data can be compressed and kept in the same database instead of being moved out to other storage. Whenever you want to access the data, you can read it from the HCC-compressed tables.
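If you want to estimate the ratio before committing to a level, Oracle ships a compression advisor in the DBMS_COMPRESSION package. A rough sketch is below -- the tablespace, schema, and table names are placeholders, and the parameter/constant names shown are from the 11gR2 version of the package (they changed slightly in later releases), so check the docs for your version:

```sql
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',       -- scratch tablespace (placeholder)
    ownname        => 'APP_OWNER',   -- schema owner (placeholder)
    tabname        => 'BIG_TABLE',   -- table to sample (placeholder)
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio: ' || l_cmp_ratio ||
                       ' (' || l_comptype_str || ')');
END;
/
```

The advisor samples the table into the scratch tablespace and compresses the sample, so it gives a realistic estimate without touching the original segment.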
Hope it helps...
Basically this post is about HCC performance in an OLTP environment; however, we run Exadata in a data warehouse environment, and I would like to share my experience with you all.
We have huge tables (>1 TB) partitioned by month. After checking how frequently the data is used, we decided to apply EHCC to partitions more than one year old.
We found that for some of the tables the compression ratio was extremely good: a 150 GB partition was compressed to 18 GB in Query High compression mode.
In Query Low compression we achieved around 4-6x compression, in Query High 8-10x, and in Archive High around 14-15x.
Performance-wise, there was no significant improvement or degradation in query performance. Since we applied compression to older partitions that are no longer updated, no performance check was done on DML statements.
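For anyone wanting to try the same approach, compressing an old partition in place looks roughly like this. The table, partition, and index names below are illustrative, not from our actual system:

```sql
-- Rewrite a year-old monthly partition with HCC (illustrative names)
ALTER TABLE sales_fact
  MOVE PARTITION sales_2012_01
  COMPRESS FOR QUERY HIGH;

-- MOVE marks the local index partition UNUSABLE, so rebuild it
ALTER INDEX sales_fact_ix
  REBUILD PARTITION sales_2012_01;
```

Note that MOVE PARTITION rewrites the segment, so plan for the temporary extra space and the index rebuilds when scheduling this on large partitions.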
The purpose behind this post is to measure not only the overall storage savings but also the overall performance gain, and whether the feature is handy on highly transactional OLTP systems too.
Thank you all for sharing your inputs/experiences. I will keep updating this thread with more information on our HCC journey.