If you use a non-Exadata machine for DR, all of your tables that are stored in HCC format on Exadata must be uncompressed before they can be accessed. You do that via ALTER TABLE ... MOVE (or ALTER TABLE ... MOVE PARTITION) with NOCOMPRESS or COMPRESS BASIC.
Beware of the space implications, as well as the time it might take to do this.
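As a sketch, the move commands look like this (the table, partition, and index names are hypothetical, and a MOVE should be scheduled in a maintenance window):

```sql
-- Rewrite the whole table into uncompressed blocks:
ALTER TABLE sales MOVE NOCOMPRESS;

-- Or rewrite a single partition with basic compression instead:
ALTER TABLE sales MOVE PARTITION sales_q1 COMPRESS BASIC;

-- A MOVE invalidates the table's indexes, so rebuild them afterwards:
ALTER INDEX sales_pk REBUILD;
```

Note that the move rewrites every row, which is where the extra space and time mentioned above come from: the old and new segments coexist until the operation completes.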
This is correct. You can create a physical standby of an Exadata database using HCC on a non-Exadata platform. After switching over to the standby, you will have to uncompress any objects using HCC before selecting from them (unless you use a ZFS Storage Appliance for your storage). This will not only require extra storage; you will also need to be sure that you have plenty of CPU capacity on the DR system. As you would expect, decompression is a very CPU-intensive task.
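To gauge the scope of that work before a switchover, you can list the segments that currently use HCC. A minimal sketch against the data dictionary (assuming DBA privileges; COMPRESS_FOR values beginning with QUERY or ARCHIVE indicate HCC):

```sql
-- Tables using an HCC compression type:
SELECT owner, table_name, compress_for
FROM   dba_tables
WHERE  compress_for LIKE 'QUERY%' OR compress_for LIKE 'ARCHIVE%'
UNION ALL
-- Individual partitions using an HCC compression type:
SELECT table_owner, table_name || ' (' || partition_name || ')', compress_for
FROM   dba_tab_partitions
WHERE  compress_for LIKE 'QUERY%' OR compress_for LIKE 'ARCHIVE%';
```

Summing the corresponding segment sizes from DBA_SEGMENTS would then give a rough idea of how much data has to be rewritten after the switchover.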
We do not recommend using a non-Exadata system for disaster recovery, for the following reasons:
<li>Performance differences between the primary and standby systems
<li>Increased resource requirements (CPU and storage) on the standby system
<li>Longer switchover window (additional time needed to decompress HCC objects)
<li>Lack of an additional environment to test patching (standby-first patch apply)
This MAA whitepaper should answer all your questions and offer suggestions on the best practices in such situations:
One thing that the (very good) whitepaper you linked to does not mention, because it was not yet the case in March 2011, is that ZFS and Pillar storage now support HCC. So a non-Exadata standby machine should be able to process HCC-compressed tables if it uses that kind of storage.