Test it out in a non-Production environment and see.
It depends on a) your compression type, b) how large your compression units are, and c) how much cardinality there is in the data.
For the oldest 33% of the partitions, we've applied 'archive high'. The compression rate varies between 76% and 92%.
For the middle 33%, we've applied 'query low'. I'm not sure of the exact compression rate as it varies quite a bit, but I'd say it's around the 60-70% range.
For the newest 33%, we've applied no compression.
For a 4.5 TB partitioned table, we've managed to shrink it down to about 2.5 TB - roughly a 44% reduction. Not bad, especially given how conservative we went with the compression types.
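For anyone wanting to try the same tiered approach, it can be sketched with Oracle's partition-level compression clauses. The table, partition, and index names below are hypothetical, and this assumes HCC-capable storage (e.g. Exadata):

```sql
-- Oldest tier: maximum compression, rarely queried
ALTER TABLE sales_hist MOVE PARTITION sales_p2017
  COMPRESS FOR ARCHIVE HIGH;

-- Middle tier: lighter columnar compression
ALTER TABLE sales_hist MOVE PARTITION sales_p2020
  COMPRESS FOR QUERY LOW;

-- Newest tier: no compression, queried most often
ALTER TABLE sales_hist MOVE PARTITION sales_p2024
  NOCOMPRESS;

-- MOVE PARTITION marks local index partitions UNUSABLE,
-- so rebuild them afterwards
ALTER INDEX sales_hist_ix REBUILD PARTITION sales_p2017;
```

One nice property of doing it per partition is that you can roll the tiers forward over time: as a partition ages out of the "hot" window, just MOVE it into the next compression level.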
If I understand correctly, the strategy you laid out for compression is that you used different levels of compression for partitions belonging to the same table? If that's the case, I would like to know how the performance of queries accessing the entire table held up.
That is correct. Basically, the newest partitions (most likely to be queried) were given the least compression (none), whereas the oldest partitions (unlikely* to be queried, as the table contains 7 years of data) were given the most compression (archive high).
As long as you keep your global stats up to date, it really seems transparent to the end user. I have not listened to a single complaint since this table was HCC'd :)
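Refreshing those global stats after the partition moves is a one-liner with DBMS_STATS; the schema and table names here are hypothetical:

```sql
-- Refresh global and partition-level statistics after
-- moving partitions into their new compression format
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'EDW',
    tabname     => 'SALES_HIST',
    granularity => 'GLOBAL AND PARTITION',
    cascade     => TRUE);  -- gather index stats too
END;
/
```

Scheduling this right after each batch of MOVE PARTITION operations keeps the optimizer's view of the table consistent, which is presumably why the change stayed invisible to end users.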
In all seriousness, end users haven't commented. I wouldn't be surprised to hear that some queries might run faster with HCC, some did in testing. It certainly hasn't caused any problems - it's a rarity: a new technology that worked exactly as it promised on its first try!
It is definitely worth implementing - just make sure the 'range' of data you use for a compression unit is large enough for you to see significant compression.
* without becoming a victim of DBARage: "Exactly WHY do you need to get 7 years of data - and HOW are you going to use 4.5 TB of information, precisely? Someone might find they're not able to log in to the EDW for more than 2 minutes at a time tomorrow..." etc.