Split brain occurs when the instances in a RAC fail to ping/connect to each other over the private interconnect, but the servers are all physically up and running and the database instance on each of these servers is also running. These individual nodes are running fine and could conceptually accept user connections and work independently. So basically, due to the lack of communication, each instance thinks that the other instance it cannot reach is down and that it needs to do something about the situation. The problem is that if we leave these instances running, the same block might get read and updated in the individual instances, causing a data integrity issue: blocks changed in one instance will not be locked and could be overwritten by another instance. Oracle has efficiently implemented a check for the split brain syndrome.
In RAC, if any node becomes inactive, or if other nodes are unable to ping/connect to a node in the RAC, then the node that first detects that one of the nodes is not accessible will evict that node from the RAC group. For example, if there are 4 nodes in a RAC cluster and node 3 becomes unavailable, and node 1 tries to connect to node 3 and finds it not responding, then node 1 will evict node 3 from the RAC group, leaving only node 1, node 2 and node 4 in the group to continue functioning.
The split brain concept can become more complicated in large RAC setups. For example, say there are 10 RAC nodes in a cluster, and 4 nodes are not able to communicate with the other 6. So two groups are formed in this 10-node cluster (one group of 4 nodes and another of 6). The nodes will quickly try to affirm their membership by locking the controlfile; the node that locks the controlfile will then check the votes of the other nodes. The group with the larger number of active nodes gets preference and the others are evicted. That said, I have only seen this node eviction issue with a single node getting evicted while the rest kept functioning, so I cannot testify from experience that this is exactly how it works, but this is the theory behind it.
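The "larger group survives" rule described above can be sketched roughly like this (a simplified illustration only, not Oracle's actual clusterware code; the node names and the `resolve_split_brain` helper are made up for the example):

```python
# Simplified illustration of split-brain resolution by subcluster size.
# NOT Oracle's implementation; it only sketches the rule that the group
# with the most active nodes survives and the rest are evicted.

def resolve_split_brain(subclusters):
    """Given the subclusters formed after an interconnect failure,
    return (survivors, evicted) based on which group is larger."""
    # Order the groups by member count, largest first.
    ordered = sorted(subclusters, key=len, reverse=True)
    survivors = ordered[0]
    evicted = [node for group in ordered[1:] for node in group]
    return survivors, evicted

# A 10-node cluster splits into a group of 6 and a group of 4:
group_a = ["node1", "node2", "node3", "node4", "node5", "node6"]
group_b = ["node7", "node8", "node9", "node10"]

survivors, evicted = resolve_split_brain([group_a, group_b])
print(survivors)  # the 6-node group stays in the cluster
print(evicted)    # the 4-node group is evicted
```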
When a node is evicted, Oracle RAC will usually reboot that node and then attempt a cluster reconfiguration to bring the evicted node back in.
You will see the Oracle error ORA-29740 when there is a node eviction in RAC. There are many reasons for a node eviction, like a heartbeat not received by the controlfile, inability to communicate with the clusterware, etc.
You can also go through Metalink Note ID 219361.1.
Is there a defined rule that RAC clusters "should" be an odd number because of this scenario?
i.e. if you have a 4 node RAC cluster and 2 nodes simultaneously die, will this cause the other 2 nodes to evict each other / terminate?
Based on this rule, 50% of the RAC cluster (one of the 2 groups) has lost contact with the other, which yields a scenario similar to a 2-node cluster.
No. The number of RAC nodes is unimportant. If you have a split brain (the interconnect is lost), the nodes will still be able to communicate via the voting disks (disk heartbeat). Through this it is always possible to tell whether a node has to leave the cluster (for this reason, a node evicts itself if it cannot access the majority of the voting disks).
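The voting-disk rule in parentheses can be illustrated like this (a rough sketch of the majority test only, not Oracle's code; the function name and disk counts are invented for the example):

```python
# Rough sketch of the voting-disk majority rule described above:
# a node evicts itself when it cannot access a majority of the
# voting disks. Not Oracle's implementation; purely illustrative.

def must_self_evict(accessible_disks, total_disks):
    """Return True if the node should leave the cluster because it
    can reach only a minority of the voting disks."""
    # Majority means strictly more than half of all voting disks.
    return accessible_disks <= total_disks // 2

# With the typical 3 voting disks:
print(must_self_evict(2, 3))  # False: 2 of 3 is a majority, node stays
print(must_self_evict(1, 3))  # True: 1 of 3 is a minority, node evicts itself
```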
However, there are 2 eviction scenarios. If the split brain is unequal, the bigger subcluster will survive.
In case of an equal split brain situation, the subcluster containing the node that holds the OCR master role at that moment will survive, since if the cluster evicted the node with the OCR master role, that role would additionally have to be relocated; it is easier to evict the other nodes not holding that role.
So if in your scenario 2 nodes die simultaneously, this will not impact the running nodes, since they will detect that these 2 nodes are no longer running. This is one of the scenarios where both heartbeats (disk and interconnect) are needed to detect that.
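The two eviction scenarios above can be sketched together in one decision function (illustrative only, not Oracle's actual logic; the node names and the `ocr_master` argument are invented for the example):

```python
# Sketch of the eviction decision described in this thread: the larger
# subcluster survives, and on an equal split the subcluster holding the
# OCR master node wins. Illustrative only, not Oracle's actual code.

def surviving_subcluster(group_a, group_b, ocr_master):
    """Pick which of two subclusters survives a split brain."""
    if len(group_a) != len(group_b):
        # Unequal split: the bigger subcluster survives.
        return group_a if len(group_a) > len(group_b) else group_b
    # Equal split: keep the group containing the OCR master, so the
    # master role does not have to be relocated.
    return group_a if ocr_master in group_a else group_b

# A 4-node cluster splits 2/2; node2 holds the OCR master role:
left = ["node1", "node2"]
right = ["node3", "node4"]
print(surviving_subcluster(left, right, ocr_master="node2"))
# ['node1', 'node2'] survives; node3 and node4 are evicted
```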