We are in the middle of a POC with an Exadata quarter rack using SATA drives. Our compression testing has produced mixed results, but it is actually OLTP compression that we have been disappointed with, since the marketing material suggests that the performance penalty for updates under OLTP compression is minimal.
We found that if you have a process that updates all the rows in a set of contiguous blocks, the updates can run up to 10X slower, especially when the update process is a recurring one that hits the same rows each time it runs. Our initial intent was to compress everything using OLTP compression, except for certain large partitioned tables, where we would use either QUERY or ARCHIVE compression on the older read-only partitions. We had to back off that approach because of the update performance penalty. Our theory is that row migration plays a role in the slowdown, since we noticed a large number of chained rows after the updates.
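A rough sketch of the kind of test that exposed this (table and column names illustrative, not our actual schema):

SQL> CREATE TABLE t_oltp COMPRESS FOR ALL OPERATIONS
  2  AS SELECT * FROM source_table;

SQL> UPDATE t_oltp SET pad_col = RPAD(pad_col, 200, 'x');
SQL> COMMIT;

SQL> ANALYZE TABLE t_oltp COMPUTE STATISTICS;
SQL> SELECT chain_cnt FROM user_tables WHERE table_name = 'T_OLTP';

Re-running the same UPDATE against the same rows, and watching CHAIN_CNT climb each time, is what pointed us at row migration.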
There is at least one bug filed regarding the update performance of OLTP compression. I don't think a patch is available as of this posting, but you would need to confirm.
Bug 9049164 - SLOW UPDATE WITH ADVANCED COMPRESSION AND A VARCHAR2 COLUMN.
From the bug description: given a table with a VARCHAR2 column, an UPDATE against that column is extremely slow when the table was created with COMPRESS FOR ALL OPERATIONS. This is not the case when the updated column is a NUMBER column.
(The SET clause was missing from the posted test case; "varchar2_col" below is an illustrative stand-in for the VARCHAR2 column being updated.)

SQL> UPDATE fin_coll_data_100m_nocompress
  2  SET varchar2_col = 'new value'
  3  WHERE rownum < 1000000;

999999 rows updated.

SQL> alter system flush buffer_cache;

SQL> UPDATE fin_coll_data_100m_comp_FAO
  2  SET varchar2_col = 'new value'
  3  WHERE rownum < 1000000;

999999 rows updated.
RESTRICTIONS ASSOCIATED WITH ADVANCED COMPRESSION
1) Compressed tables can only have columns added or dropped if the COMPRESS FOR ALL OPERATIONS option was used.
2) Compressed tables must not have more than 255 columns.
3) Compression is not applied to LOB segments.
4) Table compression is only valid for heap organized tables, not index organized tables.
5) The compression clause cannot be applied to hash or hash-list partitions. Instead, they must inherit their compression settings from the tablespace, table or partition settings.
6) Table compression cannot be specified for external or clustered tables.
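For reference, the syntax difference between the two modes in 11.1 (11.2 renames them COMPRESS BASIC and COMPRESS FOR OLTP) is roughly this; table names are illustrative:

SQL> CREATE TABLE sales_basic COMPRESS FOR DIRECT_LOAD OPERATIONS
  2  AS SELECT * FROM sales;

SQL> CREATE TABLE sales_oltp COMPRESS FOR ALL OPERATIONS
  2  AS SELECT * FROM sales;

Per restriction 1 above, only the second form allows columns to be added or dropped later.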
I just realized that I pretty much ignored your actual question about EHCC-compressed data. We've run a number of tests but haven't yet organized the results in a form suitable for posting. Once that is done I will try to post some benchmarks of DML against EHCC-compressed data.
Doing bulk updates on compressed data (of any kind) will likely be slow as that type of operation is not optimal for compression, so "testing" it is really unnecessary - it simply isn't a match between the operation and the technology.
If the workload requires bulk updates, then it is best to not use compression of any type.
Testing it is much akin to "testing" removing every row from a table (bulk delete) via DELETE versus TRUNCATE. One operation is much faster, and for a good reason.
I agree with Greg Rahn.
Even on an Oracle database without Exadata, updates are slow simply because of the amount of work Oracle has to do: it has to generate both undo (rollback) data and redo data. With compression, things get even more difficult, since compression is mostly geared toward read-only queries.
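One way to see that extra work is to check the session statistics before and after an UPDATE; the redo and undo volume generated by the statement shows up directly (standard v$ views, no Exadata needed):

SQL> SELECT n.name, s.value
  2  FROM v$mystat s JOIN v$statname n
  3    ON n.statistic# = s.statistic#
  4  WHERE n.name IN ('redo size', 'undo change vector size');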
What we found on our Exadata warehouse setup is to turn everything, as far as possible, into direct-path inserts and combine that with partition exchange to achieve the effect of deletes or updates...
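A sketch of that pattern (partition and table names illustrative): instead of updating a partition in place, rebuild its corrected contents into a staging table with a direct-path CTAS, then swap it in:

SQL> CREATE TABLE sales_stage COMPRESS FOR QUERY HIGH
  2  AS SELECT /* corrected rows */ * FROM sales PARTITION (p_2010q1);

SQL> ALTER TABLE sales EXCHANGE PARTITION p_2010q1
  2  WITH TABLE sales_stage INCLUDING INDEXES WITHOUT VALIDATION;

The exchange itself is a data-dictionary operation, so the "update" costs only the rebuild of the one partition.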
You may consider a requirement to run frequent UPDATEs against HCC-compressed tables a design mistake, which is why (hopefully) you won't find many production experiences with it.
Just don't do it :-)
An update is typically a heavy operation even on non-compressed data, but on HCC data the update follows a different algorithm: it deletes the entry from its Compression Unit (CU) and re-inserts the row into the table as non-compressed data, so you end up with a new row. If you do this to the entire table, you uncompress the entire table. But if you then move (rebuild) the table, those rows are brought back together into CUs and compressed again.
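A sketch of that recompression step (names illustrative); note that the move marks local indexes UNUSABLE, so they have to be rebuilt afterwards:

SQL> ALTER TABLE sales MOVE PARTITION p_2010q1
  2  COMPRESS FOR QUERY HIGH;

SQL> ALTER TABLE sales MODIFY PARTITION p_2010q1
  2  REBUILD UNUSABLE LOCAL INDEXES;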
Being aware of this, use it wisely.
Hope it helped.