7 Replies Latest reply: May 19, 2011 6:07 AM by 819229

UPDATE DML performance on EHCC compressed data

762405 Newbie
Does anyone have experience on Update DML performance on EHCC compressed data?

Thanks!
  • 1. Re: UPDATE DML performance on EHCC compressed data
    Brad_Peek Newbie
    We are in the middle of a POC with an Exadata quarter rack using SATA drives. Our testing of compression has been mixed, but it is actually OLTP compression that we have been disappointed with, since the marketing material suggests that the performance penalty for updates under OLTP compression is minimal.

    We found that if you have a process that updates all the rows in a set of contiguous blocks, the updates can run up to 10X slower, especially when the process is a recurring one in which those same rows are updated each time it runs. Our initial intent was to compress everything using OLTP compression, except for certain large partitioned tables where we would use either QUERY or ARCHIVE compression on the older read-only partitions. We had to back off that approach due to the update performance penalty. Our theory is that row migration is playing a role in the slower performance, since we noticed a large number of chained rows after the updates.
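    One quick way to test that row-migration theory (the table name here is just a placeholder) is to let ANALYZE populate CHAIN_CNT after the recurring update job has run:

    -- ANALYZE (not DBMS_STATS) is what populates CHAIN_CNT in USER_TABLES.
    ANALYZE TABLE my_oltp_compressed_tab COMPUTE STATISTICS;

    SELECT num_rows, chain_cnt,
           ROUND(100 * chain_cnt / NULLIF(num_rows, 0), 1) AS pct_chained
    FROM   user_tables
    WHERE  table_name = 'MY_OLTP_COMPRESSED_TAB';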

    There is at least one bug filed regarding OLTP compression performance for updates. I don't think a patch is available as of this posting, but you would need to confirm.

    Bug 9049164 - SLOW UPDATE WITH ADVANCED COMPRESSION AND A VARCHAR2 COLUMN.
    PROBLEM:
    There is a table with a VARCHAR2 column, and if we run an update against this column where the table has the COMPRESS FOR ALL OPERATIONS option, the performance is extremely slow.
    This is not the case if the column is a NUMBER column.

    DIAGNOSTIC ANALYSIS:

    SQL> UPDATE fin_coll_data_100m_nocompress
    set record_update_by=record_update_by||'S'
    where rownum < 1000000;
    999999 rows updated.
    Elapsed: 00:01:21.36

    SQL> alter system flush buffer_cache;
    System altered.
    Elapsed: 00:00:01.62

    SQL> UPDATE fin_coll_data_100m_comp_FAO
    set record_update_by=record_update_by||'S'
    where rownum < 1000000;
    999999 rows updated.
    Elapsed: 06:58:41.06

    RESTRICTIONS ASSOCIATED WITH ADVANCED COMPRESSION
    1)     Compressed tables can only have columns added or dropped if the COMPRESS FOR ALL OPERATIONS option was used.
    2)     Compressed tables must not have more than 255 columns.
    3)     Compression is not applied to LOB segments.
    4)     Table compression is only valid for heap organized tables, not index organized tables.
    5)     The compression clause cannot be applied to hash or hash-list partitions. Instead, they must inherit their compression settings from the tablespace, table or partition settings.
    6)     Table compression cannot be specified for external or clustered tables.
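    For reference, the compression options being compared in this thread are declared roughly as follows (11.2-style syntax; the table and column names are invented for illustration):

    -- OLTP / Advanced Compression (COMPRESS FOR ALL OPERATIONS in 11.1 syntax)
    CREATE TABLE txn_oltp  (id NUMBER, note VARCHAR2(100)) COMPRESS FOR OLTP;

    -- EHCC, warehouse-oriented (requires Exadata storage)
    CREATE TABLE txn_query (id NUMBER, note VARCHAR2(100)) COMPRESS FOR QUERY HIGH;

    -- EHCC, archival
    CREATE TABLE txn_arch  (id NUMBER, note VARCHAR2(100)) COMPRESS FOR ARCHIVE LOW;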
  • 2. Re: UPDATE DML performance on EHCC compressed data
    Brad_Peek Newbie
    I just realized that I pretty much ignored your actual question about EHCC compressed data. We've done a number of tests but haven't organized the results in a manner suitable for posting. Once that is done I will try and post some benchmarks of DML against EHCC compressed data.
  • 3. Re: UPDATE DML performance on EHCC compressed data
    601262 Explorer
    Doing bulk updates on compressed data (of any kind) will likely be slow as that type of operation is not optimal for compression, so "testing" it is really unnecessary - it simply isn't a match between the operation and the technology.

    If the workload requires bulk updates, then it is best to not use compression of any type.

    Testing it is much akin to "testing" removing every row from a table (bulk delete) via DELETE versus TRUNCATE. One operation is much faster, and for a good reason.

    --
    Regards,
    Greg Rahn
    http://structureddata.org
  • 4. Re: UPDATE DML performance on EHCC compressed data
    575729 Newbie
    I agree with Greg Rahn.

    Also, even on an Oracle database without Exadata, updates are slow because of the amount of work Oracle has to do: it has to generate both undo (rollback) data and redo data. With compression, things become even more difficult, since compression is mostly geared towards read-only query workloads.
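    If you want to see that cost directly, one rough way is to check the session statistics in the session doing the UPDATE, before and after the statement, and diff the values (the statistic names below are standard; nothing else is assumed):

    SELECT sn.name, ms.value
    FROM   v$mystat ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('redo size', 'undo change vector size');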
  • 5. Re: UPDATE DML performance on EHCC compressed data
    user5431290 Newbie
    What we found here on our Exadata warehouse setup is that it pays to turn as much as possible into direct-path inserts, and to combine that with partition exchange to get the effect of deletes or updates (roughly as in the sketch below)...
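    A rough sketch of that pattern, with every object name hypothetical: rebuild the changed rows into a staging table with a direct-path insert (so EHCC is applied on load), then swap the staging segment in for the partition.

    -- SALES is assumed to be partitioned; SALES_STG matches its column layout.
    INSERT /*+ APPEND */ INTO sales_stg
    SELECT s.sale_id,
           s.amount * 1.05 AS amount,      -- the "update" happens in the SELECT
           s.sale_date
    FROM   sales PARTITION (sales_2011_04) s;
    COMMIT;

    -- Metadata-only swap of the rebuilt segment for the old partition.
    ALTER TABLE sales EXCHANGE PARTITION sales_2011_04 WITH TABLE sales_stg
      INCLUDING INDEXES WITHOUT VALIDATION;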
  • 6. Re: UPDATE DML performance on EHCC compressed data
    UweHesse Expert
    You may consider the requirement to run frequent UPDATEs against HCC compressed tables a design mistake. That is also why you won't (hopefully) find many experiences from production systems about it.
    Just don't do it :-)

    Kind regards
    Uwe Hesse

    http://uhesse.wordpress.com
  • 7. Re: UPDATE DML performance on EHCC compressed data
    819229 Newbie
    An update on non-compressed data is already a heavy operation, but on HCC data the update follows a different algorithm:
    it deletes the entry from its Compression Unit (CU) and re-inserts it into the table as non-compressed data, so you end up with a new row outside the CU.
    If you do this to the entire table, it will effectively uncompress the entire table. But if you then move (rebuild) the table, those rows are brought together again to form CUs, compressing them once more.
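    A minimal sketch of that rebuild step, with the table and index names invented, and assuming the table was originally created with QUERY HIGH compression:

    -- MOVE rewrites the segment and re-forms the compression units.
    ALTER TABLE sales_hist MOVE COMPRESS FOR QUERY HIGH;

    -- The move marks the table's indexes UNUSABLE, so rebuild them afterwards.
    ALTER INDEX sales_hist_pk REBUILD;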

    Being aware of this, use it wisely.

    Hope it helped.

    LMC
