Can anyone suggest a way to enable EHCC on a schema (basically the 5 biggest schemas on a 2 TB database)? A MOS note would be appreciated if anyone knows of one. The DB was migrated to an Exadata DBM with a normal Data Pump import.
When the tables have been imported into the target DB already, there are not many options left:
1) ALTER TABLE t MOVE COMPRESS FOR QUERY HIGH (for example)
Use 1) when the tables can be moved while no end users access them; otherwise use 2)
2) online redefinition (DBMS_REDEFINITION), which has no downtime. See here for an example:
Instead of partitioning, you may choose any modification you need to do, including HCC.
The whole process is not very Exadata-specific.
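As a rough illustration of option 2), online redefinition can copy a table into an HCC-compressed interim table while end users keep working on the original. The schema, table, and interim names below are placeholders, and this is only a minimal sketch of the DBMS_REDEFINITION workflow, not a complete runbook (constraint and trigger handling is condensed into COPY_TABLE_DEPENDENTS):

```sql
-- Create an interim table with the desired HCC setting (empty copy):
CREATE TABLE scott.t_interim
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM scott.t WHERE 1 = 0;

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- Start copying data into the interim table online:
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'T', 'T_INTERIM');
  -- Clone indexes, constraints, triggers, grants onto the interim table:
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'T', 'T_INTERIM',
                                          num_errors => l_errors);
  -- Swap the tables; SCOTT.T is now the HCC-compressed copy:
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'T', 'T_INTERIM');
END;
/
```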
"Don't believe it, test it!"
Technically, compression settings are made at the segment level, not table level. That is, you can compress certain subpartitions or partitions with different settings than the other subpartitions or partitions as desired.
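Because the setting lives at the segment level, each partition of one table can carry its own compression clause. A sketch with hypothetical table and partition names:

```sql
-- Per-partition compression settings on a single range-partitioned table:
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01') COMPRESS FOR ARCHIVE HIGH,
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01') COMPRESS FOR QUERY HIGH,
  PARTITION p_cur VALUES LESS THAN (MAXVALUE)          NOCOMPRESS
);
```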
EHCC can be applied to tables and/or partitions. First, make sure the data in the table or partition you are going to compress with EHCC will not be updated: when compressed rows are updated, their compression level degrades from warehouse (EHCC) compression to OLTP compression. Second, if you add new rows to a compressed table or partition, the inserts must be direct-path loads; otherwise the newly added rows are not compressed.
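The direct-path requirement means loading into an HCC segment with the APPEND hint (or SQL*Loader direct path / CTAS) rather than conventional INSERTs. Table names below are hypothetical:

```sql
-- Direct-path load: rows written above the high-water mark get HCC-compressed.
INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales_staging;

-- A direct-path load must be committed before the session can query the table.
COMMIT;
```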
EHCC is recommended for warehouse environments, where you typically do not update the data and data is mostly inserted via direct-path loads. In an OLTP system, EHCC can be used to implement a traditional ILM (Information Lifecycle Management) policy: older partitions are compressed with EHCC while the most recent partitions are not compressed at all.
The DBMS_COMPRESSION package provides an interface for choosing the right compression level for an application. The following example calls GET_COMPRESSION_RATIO to estimate the compression ratio for an existing table or partition; the estimates are fairly close to the compression actually achieved.

DECLARE
  l_blkcnt_cmp       PLS_INTEGER;
  l_blkcnt_uncmp     PLS_INTEGER;
  l_row_perblk_cmp   PLS_INTEGER;
  l_row_perblk_uncmp PLS_INTEGER;
  l_cmp_ratio        NUMBER;
  l_comptype_str     VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => '<scratch_tablespace_name>',
    ownname        => '<owner_name>',
    tabname        => '<table_name>',
    partname       => '<partition_name>',
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_perblk_cmp,
    row_uncmp      => l_row_perblk_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line(' ');
  dbms_output.put_line('Estimated Compression Ratio using '
                       || l_comptype_str || ': ' || ROUND(l_cmp_ratio, 3));
END;
/
Compressing the actual table or partition is very simple. The following command compresses one partition.
ALTER TABLE <table_name> MOVE PARTITION <partition_name> COMPRESS FOR QUERY HIGH PARALLEL 8 UPDATE INDEXES;
Please note that a MOVE PARTITION leaves any global indexes on the table unusable; hence the UPDATE INDEXES clause. If you are compressing more than one partition, it is advisable to omit UPDATE INDEXES and rebuild the global indexes only once, after all the partitions have been compressed.
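When going the rebuild route, you can find the affected indexes and rebuild each one after the last partition move. The index name is a placeholder:

```sql
-- Global indexes left UNUSABLE by the partition moves:
SELECT index_name
FROM   dba_indexes
WHERE  table_name = '<table_name>'
AND    status = 'UNUSABLE';

-- Rebuild each one (once, after all partitions are compressed):
ALTER INDEX <index_name> REBUILD PARALLEL 8;
```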
The level of compression achieved differs from application to application, depending on the type of data involved. Nevertheless, this is one of the most useful Exadata features and is applicable to both OLTP and warehouse applications.