
How to estimate the uncompressed row count or percentage of these tables?

thrupass
thrupass Member Posts: 70

env

----------

Oracle 12.1.0.2

Exadata

tab_basic : row store compress basic

tab_qh : compress for query high

* Data loaded by direct-path INSERT operations

There are two compressed tables.

Several DML operations have happened on these tables...

How to estimate the uncompressed row count or percentage of these tables?

Thanks in advance


Answers

  • John Thorton
    John Thorton Member Posts: 14,493 Silver Crown
    edited May 2016

    query NUM_ROWS
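
    A minimal sketch of that, using the OP's table names (NUM_ROWS comes from optimizer statistics, so it is only as current as the last gather and does not distinguish compressed from uncompressed rows):

    select table_name, num_rows, blocks, compression, compress_for
    from   user_tables
    where  table_name in ('TAB_BASIC', 'TAB_QH');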

  • Unknown
    edited May 2016
    There are two compressed tables.
    Several DML operations have happened on these tables...
    How to estimate the uncompressed row count or percentage of these tables?
    

    I'm not aware of ANY tracking of compressed versus uncompressed blocks.

    As you suggest, a normal DML operation will uncompress a compressed block in order to do the DML, and that block will remain uncompressed.

  • thrupass
    thrupass Member Posts: 70
    edited May 2016

    The Advanced Compression option is not available...

    Several DML operations have happened on these tables...

    so some rows were uncompressed...

    I want to estimate the uncompressed row count

  • Unknown
    edited May 2016

    Basic compression does NOT compress rows. It 'sort of' compresses blocks.

    And, as I already said, that is NOT tracked at the block level that I am aware of.

  • John Thorton
    John Thorton Member Posts: 14,493 Silver Crown
    edited May 2016
    thrupass wrote:
     
    I want to estimate the uncompressed row count
    

    What will you do differently after you have that number?

  • thrupass
    thrupass Member Posts: 70
    edited May 2016

    I want to know when I have to recompress (for example, move partition and compress) these tables...

  • Hemant K Chitale
    Hemant K Chitale Member Posts: 15,753 Blue Diamond
    edited May 2016

    >The Advanced Compression option is not available...

    This table :

    >tab_qh : compress for query high

    does require the Advanced Compression licence option.

    >I want to know when I have to recompress (for example, move partition and compress) these tables...

    ALTER TABLE .. MOVE PARTITION .. ROW STORE COMPRESS BASIC

    with Advanced Compression it is :

    COLUMN STORE COMPRESS FOR QUERY HIGH || LOW

    You've got to be pretty sure that you DO want to compress the partition.
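
    A sketch of what that recompression might look like; the partition name P_2016_Q1 and index name TAB_QH_IX are hypothetical, for illustration only:

    alter table tab_basic move partition p_2016_q1 row store compress basic;
    -- on Exadata, for the HCC table:
    alter table tab_qh move partition p_2016_q1 column store compress for query high;
    -- local index partitions are marked UNUSABLE by the move unless UPDATE INDEXES is specified:
    alter index tab_qh_ix rebuild partition p_2016_q1;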

    Hemant K Chitale

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,579 Gold Crown
    edited May 2016

    For the "row store compess basic" table any row that has been updated after the initial direct path load will almost certainly ** end up as a migrated row so you could use:

    analyze table tab_basic list chained rows;  (after using $ORACLE_HOME/rdbms/admin/utlchain.sql to create the default chained row table) to get the rowids for the migrated rows and count them.

    The same migration assumption would be true for query high under exadata, but I've never tried listing chained rows in that case, so I don't know if it would even work - there might be some odd Oracle error message about invalid storage option.
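
    A minimal sketch of the chained-rows approach for the basic-compressed table (CHAINED_ROWS is the default table created by utlchain.sql):

    @?/rdbms/admin/utlchain.sql          -- creates the CHAINED_ROWS table
    analyze table tab_basic list chained rows into chained_rows;
    -- migrated/chained rows versus total rows
    select (select count(*) from chained_rows where table_name = 'TAB_BASIC') as migrated_rows,
           (select count(*) from tab_basic) as total_rows
    from   dual;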

    Regards

    Jonathan Lewis

    ** Update - I suppose if the pattern of your updates affected several rows in any one block you might find that most of the updated rows from a given block migrated and then left enough space for a couple of them to expand and stay in the same block.  Here's an article I wrote for allthingsoracle with some ideas about what can happen on updates on tables defined with basic compression: Compression in Oracle – Part 2: Read-Only Data – All Things Oracle

  • thrupass
    thrupass Member Posts: 70
    edited May 2016

    On Exadata

    COLUMN STORE COMPRESS FOR QUERY HIGH || LOW

    This command doesn't need the Advanced Compression option.

  • Hemant K Chitale
    Hemant K Chitale Member Posts: 15,753 Blue Diamond
    edited May 2016

    Oh. http://docs.oracle.com/database/121/DBLIC/editions.htm#DBLIC116

    Row-Level Locking requires it:

    Hybrid Columnar Compression Row-Level Locking (Note 1)
        SE One: N    SE: N    EE: Y - Requires the Advanced Compression option, and Exadata,
        Supercluster, ZFS, or FS1 storage

    Note 1: This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

  • 3223386
    3223386 Member Posts: 154
    edited May 2016

    Hi,

    Please check MOS note 1284972.1, which gives a clear picture. Hope it helps. I understood the compression ratio and the uncompressed blocks using this note.
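
    A sketch of one way to sample per-row compression directly (not necessarily what that note describes, and assuming execute privilege on DBMS_COMPRESSION), using DBMS_COMPRESSION.GET_COMPRESSION_TYPE, which reports the compression type of the block holding a given rowid; the OP's tab_qh is used here:

    -- requires SET SERVEROUTPUT ON; samples roughly 1% of rows to keep the cost down
    declare
      l_uncomp pls_integer := 0;
      l_total  pls_integer := 0;
    begin
      for r in (select rowid as rid from tab_qh sample (1)) loop
        l_total := l_total + 1;
        -- count sampled rows whose compression type is reported as COMP_NOCOMPRESS
        if dbms_compression.get_compression_type(user, 'TAB_QH', r.rid)
             = dbms_compression.comp_nocompress then
          l_uncomp := l_uncomp + 1;
        end if;
      end loop;
      if l_total > 0 then
        dbms_output.put_line(l_uncomp || ' of ' || l_total || ' sampled rows (' ||
                             round(100 * l_uncomp / l_total, 1) || '%) are uncompressed');
      end if;
    end;
    /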

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,579 Gold Crown
    edited May 2016

    How come you've got time to tell someone they're wrong, but you don't have time to acknowledge an answer that addresses your question?

  • Hemant K Chitale
    Hemant K Chitale Member Posts: 15,753 Blue Diamond
    edited May 2016

    >For the "row store compess basic" table any row that has been updated after the initial direct path load will almost certainly ** end up as a migrated row

    Ah yes, Jonathan.

    This in 11.2 :

    SQL>drop table hkc_obj_list;

    Table dropped.

    SQL>create table hkc_obj_list compress basic as select * from user_objects where 1=2;

    Table created.

    SQL>select compression, compress_for, pct_free from user_tables where table_name = 'HKC_OBJ_LIST';

    COMPRESS COMPRESS_FOR  PCT_FREE
    -------- ------------ ----------
    ENABLED  BASIC                0

    1 row selected.

    SQL>insert /*+ APPEND */ into hkc_obj_list select * from user_objects;

    245620 rows created.

    SQL>commit;

    Commit complete.

    SQL>analyze table hkc_obj_list compute statistics;

    Table analyzed.

    SQL>select chain_cnt from user_tables where table_name = 'HKC_OBJ_LIST';

     CHAIN_CNT
    ----------
             0

    1 row selected.

    SQL>update hkc_obj_list set object_name=object_name where rownum < 10001;

    10000 rows updated.

    SQL>commit;

    Commit complete.

    SQL>analyze table hkc_obj_list compute statistics;

    Table analyzed.

    SQL>select chain_cnt from user_tables where table_name = 'HKC_OBJ_LIST';

     CHAIN_CNT
    ----------
          7460

    1 row selected.

    Hemant K Chitale

  • Jonathan Lewis
    Jonathan Lewis Member Posts: 9,579 Gold Crown
    edited May 2016

    Hemant,

    It looks like migrating (effectively deleting) 3 rows from a block leaves enough space for one row to expand in place.

    I wonder if you could get 100% migration if you did something like where mod(object_id,25) = 0;

    As another thought - if the OP had locked the table stats after the initial load then dba_tab_modifications would probably show the number of updates on the table, and that might be a reasonable indicator of the number of migrated rows if we had some idea of how many times a typical row was updated.  Again, I don't know how true that might be when thinking of HCC.  Possibly there's also a threat that a row migration might be recorded as a delete and insert rather than an update (but I would be just a little bit surprised if that were the case).
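
    A sketch of that check against the OP's tables (the in-memory monitoring data has to be flushed before the view is current):

    exec dbms_stats.flush_database_monitoring_info;
    select table_name, partition_name, inserts, updates, deletes, timestamp
    from   dba_tab_modifications
    where  table_name in ('TAB_BASIC', 'TAB_QH');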

    Regards

    Jonathan Lewis

  • Mark D Powell
    Mark D Powell Member Posts: 5,914 Blue Diamond
    edited May 2016

    Thrupass, based on Jonathan's and Hemant's postings it would appear that tables loaded via direct load and later updated should not use basic compression.  The following note, which warns against compressing tables subject to regular updates, probably needs to take note of the direct-load issue:

    Advanced Table Compression (Doc ID 1060693.1)

    - -

    It would seem to me that you could calculate what the expected uncompressed table size should be and then compare the current, compressed, allocation against that calculation.  When the compressed table passes some percentage of the expected uncompressed size, more investigation would be warranted and perhaps a table reorganization for space management purposes might be beneficial.  If you do rebuild any tables you should keep a record, because if the reorganization needs to be redone within N time period then the table probably should not be compressed.  I leave N to what your shop considers reasonable: 3 months, 6 months, 1 year, etc.
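
    A rough sketch of that comparison; NUM_ROWS * AVG_ROW_LEN is only a crude estimate of the raw row data (and AVG_ROW_LEN can itself be affected by compression), so treat the ratio as an indicator rather than a measurement:

    select t.table_name,
           round(sum(s.bytes) / 1024 / 1024, 1)                               as segment_mb,
           round(max(t.num_rows * t.avg_row_len) / 1024 / 1024, 1)            as est_row_data_mb,
           round(sum(s.bytes) / nullif(max(t.num_rows * t.avg_row_len), 0), 2) as alloc_to_data_ratio
    from   user_tables t
           join user_segments s on s.segment_name = t.table_name
    where  t.table_name in ('TAB_BASIC', 'TAB_QH')
    group  by t.table_name;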

    - -

    HTH -- Mark D Powell --

This discussion has been closed.