This question relates to STANDBY databases in a RAC environment and what happens on a reboot of a node.
We have a STANDBY Two-Node RAC Cluster that houses 5 databases. Each of these databases is a standby and runs in MOUNT mode.
Upon reboot of a node in the cluster, CRS tried to start the instances, but they did not come up.
It appears the instances on the restarted node failed to start because the database was already running in MOUNT mode.
CRS does not know to start the instance in MOUNT mode, so it attempts a normal read-write open; that raises an underlying error and the instance never starts.
If we issue a srvctl start on the instance without "-o mount", we can recreate the error and the instance won't start. Adding "-o mount" brings the instance up fine.
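For reference, this is roughly how we reproduce it and work around it (a sketch using 11.1 srvctl syntax; STBYDB is a placeholder for the actual db_unique_name):

```shell
# Recreates the failure: without a start option, CRS/srvctl attempts a
# normal read-write open, which a mounted physical standby cannot do.
srvctl start database -d STBYDB

# Workaround: pass the start option explicitly and the standby mounts fine.
srvctl start database -d STBYDB -o mount
```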
How do sites handle this?
Is it possible to add a parameter in CRS which will start the instance in the same mode as the database? If we embed an open mode and a role in the crs for each database, what happens when the STANDBY becomes PRIMARY?
We are on version 11.1 of the database. The Linux is OEL 5.6.
So, looking at the portion of the link that refers to 'Add Standby database and Instances to the OCR' - If we use SRVCTL to give the STANDBY the role of ‘physical_standby’ and the start option of ‘mount’, what effect will that have if the STANDBY becomes our PRIMARY?
Would these database settings need to be modified manually with SRVCTL each time?
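If I understand the OCR approach correctly, it would look something like this (a sketch, 11.1 syntax, STBYDB being a placeholder db_unique_name for one of our standbys):

```shell
# Record the Data Guard role and default start option in the OCR so that
# CRS mounts (rather than opens) the standby after a node reboot.
srvctl modify database -d STBYDB -r physical_standby -s mount

# Show what CRS has stored, including role and start option.
srvctl config database -d STBYDB -a
```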
We understand why the instance is not starting when the node is rebooted, we are looking for a best practice of how this is implemented.
In further testing we have found that the SRVCTL 'role' can be set and is manipulated by the DG Broker during a switchover. However, the SRVCTL 'start option' can be set but is not changed by a switchover, so we will have to remember to modify it ourselves in the event we switch over and plan to remain in that configuration for any length of time.
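So our post-switchover step would presumably be a manual srvctl change on each side, along these lines (a sketch; NEWPRIMARY and NEWSTANDBY are placeholder db_unique_names):

```shell
# After a switchover the broker updates the role but not the start option,
# so adjust the stored start option manually on both databases.
srvctl modify database -d NEWPRIMARY -s open
srvctl modify database -d NEWSTANDBY -s mount
```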