To fully answer your question I'd need to know more about how you're loading data, your partition layout, and the expected update/delete volumes.
A few general notes though:
You'd need to rebuild the affected table or partition using ALTER TABLE MOVE to ensure everything is compressed with columnar compression. You can determine the compression level of a block with a block dump, but dumping every block in a table would be a fairly expensive process. I'd suggest using indirect methods, such as DBA_TAB_MODIFICATIONS, an update trigger that tracks update counts, or monitoring compression ratios over time. To get the highest possible compression ratios, you may also want to consider re-loading the table/partition with sorted data.
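To make that concrete, here is a sketch of both the rebuild and one of the indirect checks. All object names (sales, p_2012_q3, sales_pk) are hypothetical placeholders:

```sql
-- Rebuild one partition so every block is columnar-compressed again.
ALTER TABLE sales MOVE PARTITION p_2012_q3
  COMPRESS FOR QUERY HIGH;

-- A MOVE invalidates local index partitions, so rebuild them.
ALTER INDEX sales_pk REBUILD PARTITION p_2012_q3;

-- Indirect check: how much DML has hit each partition since the last
-- stats gather. Flush the in-memory monitoring counters first.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT partition_name, inserts, updates, deletes
  FROM dba_tab_modifications
 WHERE table_name = 'SALES';
```

A high update count against an HCC partition is a hint that a recompress may be worthwhile, without the cost of dumping blocks.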
Thank you Marc.
Our table is partitioned based on a specific DATE column.
Active partitions (the past 3 months) are OLTP compressed. All partitions older than 3 months are HCC-QH.
Does it make sense to update this DATE column (the partition key) of a history record whenever any other column of the row gets updated? If this works, I can add a trigger to update the partition key column, which will move the record to the OLTP (current) partition.
We do have a maintenance procedure that moves current partitions older than 3 months from OLTP to HCC-QH (via ALTER TABLE ... MOVE).
If your updates touch a very small percentage of the rows, it may not be worth it; otherwise, you have two options:
1- replace your updates with an INSERT AS SELECT, putting rows into an HCC compressed table that you later exchange with the partition
2- do the update, then a move.
Option 1 is almost certainly faster than option 2, since it writes the data once instead of updating and then rewriting it.
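Option 1 could be sketched as below. All names (sales, sales_stage, p_2012_q3, pending_updates, status) are hypothetical, and the CASE expression stands in for whatever change your update would have made:

```sql
-- Write the partition's rows, with the changes already applied, into an
-- HCC-compressed staging table in a single pass.
CREATE TABLE sales_stage
  COMPRESS FOR QUERY HIGH
  AS
  SELECT s.sale_id,
         s.sale_date,
         CASE WHEN s.sale_id IN (SELECT sale_id FROM pending_updates)
              THEN 'CANCELLED'
              ELSE s.status
         END AS status
    FROM sales PARTITION (p_2012_q3) s;

-- Swap the staging table in for the old partition; this is a dictionary
-- operation, not a data copy.
ALTER TABLE sales
  EXCHANGE PARTITION p_2012_q3 WITH TABLE sales_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

Because the rows land directly in the CTAS (a direct-path insert), they are HCC-compressed as they are written, and the exchange itself is near-instant.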
I can see another option:
You may split your table vertically into 2 tables.
Table 1, with most of the columns.
Table 2, with the columns that you want to update.
Compress table 1 with HCC.
Compress table 2 with OLTP.
Of course it will depend on how many columns you want to update (if you want to update most of the columns, my advice would be worthless).
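A minimal sketch of that vertical split, assuming hypothetical names (sales_static, sales_mutable) and a shared key:

```sql
-- Table 1: the wide, rarely-changing columns, HCC-compressed.
CREATE TABLE sales_static (
  sale_id    NUMBER PRIMARY KEY,
  sale_date  DATE,
  product_id NUMBER,
  amount     NUMBER
) COMPRESS FOR QUERY HIGH;

-- Table 2: only the columns that receive updates, OLTP-compressed so
-- updates do not decompress HCC compression units.
CREATE TABLE sales_mutable (
  sale_id NUMBER PRIMARY KEY REFERENCES sales_static (sale_id),
  status  VARCHAR2(20)
) COMPRESS FOR OLTP;

-- Queries reassemble the full row with a join:
-- SELECT s.*, m.status
--   FROM sales_static s JOIN sales_mutable m USING (sale_id);
```

The trade-off is the extra join on every read, which is why this only pays off when the updatable columns are a small slice of the row.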
I would advise that you refer to the following link by Kerry Osborne to find out what level of compression a particular row has been compressed with.
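If memory serves, the technique in that post boils down to DBMS_COMPRESSION.GET_COMPRESSION_TYPE (available from 11gR2). A sketch with hypothetical object names:

```sql
-- GET_COMPRESSION_TYPE returns a constant per row:
-- 1 = NOCOMPRESS, 2 = OLTP, 4 = HCC Query High, 8 = HCC Query Low,
-- 16 = HCC Archive High, 32 = HCC Archive Low.
-- Sampling keeps the per-row function calls affordable.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'SALES', t.rowid)
         AS comp_type,
       COUNT(*) AS row_count
  FROM sales SAMPLE (1) t
 GROUP BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'SALES', t.rowid);
```

Rows reporting type 2 inside a nominally HCC partition are the ones that updates have migrated to OLTP compression.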
As far as your problem goes, the question is how many rows in your partition are getting updated. If very few rows are updated, it does not matter much, since only those rows will change their compression type from EHCC to OLTP. If a large number of rows are updated, you may want to consider compressing the partition again using EHCC. As of now I am not aware of any option that would recompress just a few rows in a partition; I think you would have to recompress the entire partition using an ALTER TABLE ... MOVE statement.
To resolve your problem I would suggest the following options.
Option 1: If you have footprint columns in your table (update time, insert time, etc.), you can use the update time column to check which rows have been updated and then recompress those partitions in your weekly maintenance job.
Option 2: Set up an update trigger on your table, and for every update insert a row into another table. You can refine this trigger to add only one row per partition, since you already know your partition boundaries (for example, if there is already a row for 11/23/2012, you don't want to insert a row for 11/24/2012, since you already know that you have to compress that partition).
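Option 2 could be sketched roughly as follows, with hypothetical names throughout (sales, sale_date, dirty_partitions) and assuming monthly partitions so TRUNC(..., 'MM') maps a date to its partition boundary:

```sql
-- One row per partition that has seen an update since the last recompress.
CREATE TABLE dirty_partitions (
  partition_start DATE PRIMARY KEY
);

CREATE OR REPLACE TRIGGER sales_track_upd
AFTER UPDATE ON sales
FOR EACH ROW
BEGIN
  INSERT INTO dirty_partitions (partition_start)
  VALUES (TRUNC(:NEW.sale_date, 'MM'));
EXCEPTION
  -- Partition already flagged: the primary key enforces one row each,
  -- so we simply swallow the duplicate.
  WHEN DUP_VAL_ON_INDEX THEN NULL;
END;
/
```

The weekly job would then read dirty_partitions, MOVE each listed partition back to HCC, and delete the corresponding rows.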
Triggers are expensive, so I would advise that if the updates are not hurting you much, you can very well do nothing about it. First measure the size of your partition before the updates (after compression is done) and again after the updates have been performed. That way you will know for sure whether you need the weekly job or not.