The most common way to configure failover between separate data centers is to configure a separate RAC cluster at each data center and set up Oracle Data Guard replication between them. You can then use Oracle client features to fail over on the client side. There is an excellent white paper on the client-side configuration at http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr2-client-failover-173305.pdf
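To make the Data Guard side of this concrete, here is a minimal sketch of setting up a broker configuration between the two clusters with the DGMGRL command-line tool. The database names (`proddb_dc1`, `proddb_dc2`) and connect identifiers are hypothetical placeholders, not anything from this thread:

```text
-- Hypothetical names: proddb_dc1 is the primary in datacenter 1,
-- proddb_dc2 the physical standby in datacenter 2.
DGMGRL> CREATE CONFIGURATION 'dc_failover' AS
          PRIMARY DATABASE IS 'proddb_dc1'
          CONNECT IDENTIFIER IS proddb_dc1;
DGMGRL> ADD DATABASE 'proddb_dc2' AS
          CONNECT IDENTIFIER IS proddb_dc2;
DGMGRL> ENABLE CONFIGURATION;
-- Optionally let the broker fail over automatically when the
-- primary becomes unreachable:
DGMGRL> ENABLE FAST_START FAILOVER;
```

This complements the client-side piece from the white paper: the broker handles the database role transition, and the client connect descriptor decides which datacenter's SCAN to try first.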
Thanks Marc!! So we don't have an option to create a cluster between Exadata appliances? And the only way is to create two separate Oracle RAC clusters?
Yes, it's technically possible to create what's called an extended cluster. However, maintaining the data consistency and availability that you would expect from an Oracle database across two sites imposes significant engineering requirements, and therefore significant investment.
Consider the following additional factors when implementing an extended cluster architecture:
- Network, storage, and management costs increase.
- Write performance incurs the overhead of inter-site network latency. Test the workload to assess the impact of that overhead.
- Because this is a single database without Oracle Data Guard, there is no protection from data corruption or data failures.
- The Oracle release, the operating system, and the clusterware used for an extended cluster all factor into the viability of extended clusters.
- When choosing to mirror data between sites:
- Host-based mirroring requires a clustered logical volume manager to allow active/active mirrors and thus a primary/primary site configuration. Oracle recommends using ASM as the clustered logical volume manager.
- Array-based mirroring allows active/passive mirrors and thus a primary/secondary configuration.
- Storage costs for this solution are very high, requiring a minimum of two full copies of the storage (one at each site).
- Extended clusters need additional destructive testing, covering:
- Site failure
- Communication failure
- For full disaster recovery, complement the extended cluster with a remote Data Guard standby database, because this architecture:
- Maintains an independent physical replica of the primary database
- Protects against regional disasters
- Protects against data corruption and other potential failures
- Provides options for performing rolling database upgrades and patch set upgrades
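To illustrate the host-based mirroring point above, here is a sketch of how ASM can act as the clustered volume manager in an extended cluster: one failure group per site with normal redundancy, so each datacenter holds a full mirror copy. The disk paths, disk group name, and instance SIDs are hypothetical:

```sql
-- Hypothetical disk paths; one failure group per site so ASM keeps
-- a complete mirror copy of the data in each datacenter.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP site1 DISK '/dev/site1_disk1', '/dev/site1_disk2'
  FAILGROUP site2 DISK '/dev/site2_disk1', '/dev/site2_disk2';

-- Per-instance setting so each site's ASM instance reads from its
-- local mirror rather than crossing the inter-site link:
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITE1' SID = '+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITE2' SID = '+ASM2';
```

Note that this mirroring protects against site loss, not against logical corruption, which is why the remote Data Guard standby is still recommended.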
Marc - Thanks for the info!!
If configuring an Exadata appliance cluster is technically feasible, do we have something like an "Exadata Virtual Node" or "Virtual IP" (something similar to a SCAN IP) that the client software can point to?
For context, I am on the client software side, and my customer is configuring an Exadata appliance cluster. Earlier they were using Oracle RAC, and my software worked properly using the SCAN IP. Now they are going to configure an Exadata appliance cluster, so which super or virtual node IP can our client software use to make the failover transparent?
Think of it this way: if there were a single virtual IP, which datacenter would it reside in? What would happen if that datacenter failed? What would happen if the link between the datacenters failed?
If you configure client-side failover as per the white paper, the client will connect to the SCAN address of one datacenter preferentially, and fail over to the other if for any reason it can't reach the first.
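As a rough sketch of what that client-side configuration looks like, a tnsnames.ora entry can list both SCANs in order, so the client tries datacenter 1 first and falls back to datacenter 2. The host names, service name, and timeout values below are illustrative assumptions, not values from this thread:

```text
# Hypothetical SCANs: dc1-scan / dc2-scan; FAILOVER across the two
# ADDRESS_LISTs is on by default, so dc1 is tried before dc2.
PROD =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 5)(TRANSPORT_CONNECT_TIMEOUT = 3)(RETRY_COUNT = 3)
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dc1-scan.example.com)(PORT = 1521)))
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dc2-scan.example.com)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = prod_service))
  )
```

The timeouts and retry count keep the client from hanging on an unreachable site before moving to the next ADDRESS_LIST; the white paper linked above covers the full set of recommended settings.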
(And as a side note, with Oracle 12c you do have another option: an entirely new distributed naming and load-balancing system called Global Data Services.)