7 Replies Latest reply: Jun 14, 2014 10:35 PM by Marc Fielding

    HCC Compression not working

    sashi415598

      Hello Experts,

       

      How can we determine the minimum number of rows an insert must involve to trigger HCC compression? Single-row direct-load inserts are not getting compressed even though the table has HCC compression enabled.

       

      Please let me know if there is a way to determine the minimum number of rows an insert needs before HCC-compressed rows are produced.

        • 1. Re: HCC Compression not working
          czipper

          How do you know that HCC compression isn't working? What statistics / metadata are you looking at which show it isn't working?

          • 2. Re: HCC Compression not working
            sashi415598

            Hi,

            Below is an example. I have QUERY HIGH compression set on the table, and I'm checking individual rows for compression using dbms_compression.Get_compression_type.

            If I perform the same insert on the same table with more than 50 rows, all 50 rows are compressed.

             

            OWNER                          TABLE_NAME                     COMPRESS COMPRESS_FOR
            ------------------------------ ------------------------------ -------- ------------
            DATPRD                         HCC_GG_TEST                    ENABLED  QUERY HIGH

            INSERT /*+ append_values */ INTO DATPRD.HCC_GG_TEST (USER#,
                                            NAME,
                                            TYPE#,
                                            PASSWORD,
                                            DATATS#,
                                            TEMPTS#,
                                            CTIME,
                                            PTIME,
                                            EXPTIME,
                                            LTIME,
                                            RESOURCE$,
                                            AUDIT$,
                                            DEFROLE,
                                            DEFGRP#,
                                            DEFGRP_SEQ#,
                                            ASTATUS,
                                            LCOUNT,
                                            DEFSCHCLASS,
                                            EXT_USERNAME,
                                            SPARE1,
                                            SPARE2,
                                            SPARE3,
                                            SPARE4,
                                            SPARE5,
                                            SPARE6)
                 VALUES (
                           2384430164.9251,
                           'meOUIItzfgVmRuOqtwpBKnSzpmSkUe',
                           5965498151.01388,
                           'BprFgmVoNPOkfTbVDTKKOnmLbRhEdn',
                           4800871913.6626,
                           290474614.761038,
                           TO_DATE ('06/19/2015 19:10:00', 'MM/DD/YYYY HH24:MI:SS'),
                           TO_DATE ('03/30/2015 08:18:50', 'MM/DD/YYYY HH24:MI:SS'),
                           TO_DATE ('01/14/2015 15:22:27', 'MM/DD/YYYY HH24:MI:SS'),
                           TO_DATE ('06/08/2014 23:42:50', 'MM/DD/YYYY HH24:MI:SS'),
                           1620845036.66995,
                           'KageOoCIwcyHRqdHYKKaupimoEjjdZLekrsCaO',
                           7068569538.25592,
                           2768176254.69721,
                           1885373625.6735,
                           634976331.500214,
                           7162839632.37634,
                           'EPfbwldeJrmvzeQNsNOmhTrJrFOViN',
                           'vfJUJiahNsVLGdPHGuodXvduvwVtxezZJT',
                           2685651820.43253,
                           9689374832.02134,
                           'hCQkxdxLqwsfxpncoPIzpULGjBXqtbMkDqql',
                           'DuOfXGYRAZnYARDgVcIMIvqLpHgHrpHrrwZzjH',
                           'YMYFBsVxAWSZGcTtpgjjHxqkFGzYxkqOOHzzjaiKv',
                           TO_TIMESTAMP ('09/21/2016 9:35:34.000000 AM',
                                         'MM/DD/YYYY fmHH12fm:MI:SS.FF AM'));

            1 row created.

            SQL> commit;

            Commit complete.

            SQL>


            OWNER                          TABLE_NAME                     COMPRESS COMPRESS_FOR
            ------------------------------ ------------------------------ -------- ------------
            DATPRD                         HCC_GG_TEST                    ENABLED  QUERY HIGH

            SQL> col Compression_type format a50
            SQL> SELECT CASE comp_type
                          WHEN 1 THEN 'No Compression'
                          WHEN 2 THEN 'Advanced compression level'
                          WHEN 4 THEN 'Hybrid Columnar Compression for Query High'
                          WHEN 8 THEN 'Hybrid Columnar Compression for Query Low'
                          WHEN 16 THEN 'Hybrid Columnar Compression for Archive High'
                          WHEN 32 THEN 'Hybrid Columnar Compression for Archive Low'
                          WHEN 64 THEN 'Compressed row'
                          WHEN 128 THEN 'High compression level for LOB operations'
                          WHEN 256 THEN 'Medium compression level for LOB operations'
                          WHEN 512 THEN 'Low compression level for LOB operations'
                          WHEN 4096 THEN 'Basic compression level'
                          ELSE 'Unknown Compression Type'
                        END AS Compression_type,
                        n AS num_rows
                 FROM   (SELECT comp_type,
                                COUNT(*) n
                         FROM   (SELECT dbms_compression.Get_compression_type(
                                            'DATPRD', 'HCC_GG_TEST', ROWID) AS comp_type
                                 FROM   DATPRD.HCC_GG_TEST)
                         GROUP  BY comp_type);

            COMPRESSION_TYPE                                     NUM_ROWS
            -------------------------------------------------- ----------
            No Compression                                             1
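
            For contrast, here is a minimal sketch of the bulk case (assuming the table and the single row above already exist): one direct-path INSERT ... SELECT that copies the existing row through a row generator. After it commits, the same check reports the new rows as Hybrid Columnar Compression for Query High, while the original row stays uncompressed.

            -- Direct-path bulk load: 1,000 copies of the existing row in one statement
            INSERT /*+ APPEND */ INTO DATPRD.HCC_GG_TEST
            SELECT t.*
            FROM   DATPRD.HCC_GG_TEST t,
                   (SELECT LEVEL FROM dual CONNECT BY LEVEL <= 1000);

            COMMIT;   -- direct-path changes must be committed before the session can re-query the table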


            • 3. Re: HCC Compression not working
              Marc Fielding

              Sashi:

               

              You won't get HCC compression if you're only inserting a single row at a time. The /*+ APPEND_VALUES */ hint was made for bulk inserts operating on large numbers of rows.

               

              Kerry Osborne has a great blog post with sample code here: Kerry Osborne’s Oracle Blog » Blog Archive » APPEND_VALUES and SYS_DL_CURSOR Hints with…
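
              For illustration (not Kerry's exact code, and using a made-up table HCC_DEMO with a single NUMBER column), a FORALL array insert with the APPEND_VALUES hint loads the whole batch direct-path, so it can land in new HCC compression units:

              DECLARE
                TYPE num_tab IS TABLE OF NUMBER;
                l_vals num_tab;
              BEGIN
                -- Build a 1,000-element collection in memory
                SELECT LEVEL BULK COLLECT INTO l_vals
                FROM   dual CONNECT BY LEVEL <= 1000;

                -- One direct-path array insert; the whole batch goes above the high-water mark
                FORALL i IN 1 .. l_vals.COUNT
                  INSERT /*+ APPEND_VALUES */ INTO hcc_demo (id) VALUES (l_vals(i));

                COMMIT;   -- required before this session can read the table again
              END;
              /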

               

              HTH!


              Marc

              • 4. Re: HCC Compression not working
                sashi415598

                Thanks.

                 I'm working on a specific requirement where we are using GoldenGate to replicate data into HCC-compressed tables. We are ensuring that these tables will only receive inserts, as they are intended to hold historic data.

                 But since these are replicated transactions, we cannot ensure that there will always be bulk inserts taking place.

                 Is there a way or formula that controls HCC compression?

                 How do we know the minimum number of rows an insert must contain for HCC compression to be triggered for a particular table?

                 

                Any info on the above would be helpful.

                 

                Regards,

                Sasidhar.P

                • 5. Re: HCC Compression not working
                  sagargole

                  Are these HCC tables partitioned? If you do not have much control over the insert statements, you can schedule batch jobs that compress the partitions using the ALTER TABLE table_name MOVE PARTITION partition_name COMPRESS FOR QUERY HIGH command.

                   

                  Since these are historic tables you can execute the compression script for only the last few partitions.
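
                  A rough sketch of such a batch job (the partition-name cutoff is made up; UPDATE INDEXES keeps the indexes usable across the move):

                  BEGIN
                    FOR p IN (SELECT partition_name
                              FROM   dba_tab_partitions
                              WHERE  table_owner = 'DATPRD'
                              AND    table_name  = 'HCC_GG_TEST'
                              AND    partition_name < 'P_2014_06')   -- hypothetical cutoff
                    LOOP
                      EXECUTE IMMEDIATE
                         'ALTER TABLE datprd.hcc_gg_test MOVE PARTITION ' || p.partition_name
                      || ' COMPRESS FOR QUERY HIGH UPDATE INDEXES';
                    END LOOP;
                  END;
                  /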

                  • 6. Re: HCC Compression not working
                    sashi415598

                    Hello,

                     

                    We are still having issues with single-row direct-path inserts not getting compressed. Is there a way we can get HCC compression working, rather than compressing manually?

                     

                    I don't see any MetaLink note saying that single-row direct-load inserts will not get compressed either. Is this a bug?

                     

                     

                    Thanks,

                    Sasidhar.P

                    • 7. Re: HCC Compression not working
                      Marc Fielding

                      Hello Sasidhar,

                       

                       

                       

                      Single-row inserts, whether direct path or not, cannot be HCC-compressed. Since the data is stored in columns instead of rows, adding a single row would require adding values to each of the columns, effectively rebuilding the entire compression unit, or perhaps even more if it overflows. And putting a single row into a new compression unit would defeat the whole point of compression. My previous post in this thread has more references.

                       

                       

                       

                      Your options include (a sketch of the staging-table option follows the list):

                      - Using OLTP compression, which may provide sufficient compression ratios for your application

                      - Partitioning by an incrementing key, insertion date, or similar surrogate, and rebuilding partitions with HCC once they're no longer getting inserted into

                      - Inserting into a staging table (perhaps even a global temporary table), and doing a bulk insert into the "real" table once a large volume of data has been accumulated

                      - If data is coming from a non-Oracle source, you may consider staging it in a text file and loading in bulk with SQL*Loader
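
                      A minimal sketch of the staging-table option from the list above (object names are made up, and a real job would need a watermark or a partition exchange to avoid losing rows that arrive between the sweep and the truncate):

                      -- GoldenGate delivers its single-row inserts here, conventional path:
                      CREATE TABLE datprd.hcc_gg_stage AS
                      SELECT * FROM datprd.hcc_gg_test WHERE 1 = 0;

                      -- Scheduled sweep: one direct-path bulk insert, which does get HCC-compressed
                      INSERT /*+ APPEND */ INTO datprd.hcc_gg_test
                      SELECT * FROM datprd.hcc_gg_stage;
                      COMMIT;

                      -- Caution: rows inserted into the stage after the sweep but before this
                      -- truncate would be lost without the safeguards mentioned above
                      TRUNCATE TABLE datprd.hcc_gg_stage;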

                       

                       

                       

                      Marc