Our 4-node 11.2.0 RAC runs into trouble when we do a SAN virtualisation failover.
The failover finishes just in time (40 seconds), multipath reports that all paths are up and the storage is accessible, but after a while we see:
2013-03-06 17:45:46.243: [ CSSD]clssnmvDiskCheck: (ORCL:RAC_OCR_VOTE_01) No I/O completed after 90% maximum time, 200000 ms, will be considered unusable in 19170 ms
2013-03-06 17:45:46.243: [ CSSD]clssnmvDiskCheck: (ORCL:RAC_OCR_VOTE_02) No I/O completed after 90% maximum time, 200000 ms, will be considered unusable in 19500 ms
2013-03-06 17:45:47.246: [ CSSD]clssnmvDiskCheck: (ORCL:RAC_OCR_VOTE_03) No I/O completed after 90% maximum time, 200000 ms, will be considered unusable in 19060 ms
After 200000 ms all 4 nodes reboot themselves.
Voting disks are in ASM.
I can't understand why CSSD is not able to write to the voting disks when they are available.
We also have two 2-node RACs and they don't have this problem.
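For reference, the 200000 ms in the messages above matches the CSS disktimeout (200 seconds by default in 11.2), i.e. CSSD saw no completed voting disk I/O for the whole disktimeout window. A minimal sketch of how to read the relevant CSS timeouts, assuming crsctl from the Grid Infrastructure home is in the PATH (the comments show the 11.2 defaults):

# run on any node as the Grid Infrastructure owner
crsctl get css disktimeout   # voting disk I/O timeout, default 200 seconds
crsctl get css misscount     # network heartbeat timeout, default 30 seconds

Raising disktimeout with "crsctl set css disktimeout <seconds>" (run as root) is possible, but presumably something Oracle Support would have to bless.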
We are using FalconStor NSS for SAN/storage virtualisation across our two datacenters.
Data is mirrored synchronously between both datacenters.
During a failover (the LUNs and servers move to the second, remote virtualisation appliance), the presented disks are not accessible for a couple of seconds.
Multipathing and HBA parameters are configured so that the server only marks a disk as absent after 60 seconds (roughly as in the multipath.conf sketch at the end of this post).
In our case the disks are accessible and available again after 30 seconds. Multipath shows them as active/ready, but something in Oracle is not happy with that.
After 200 seconds all nodes fence each other.
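For reference, this is roughly how the 60-second behaviour is configured on the hosts. It is only a sketch assuming a RHEL-style device-mapper-multipath setup; the FalconStor vendor/product strings and the exact values are placeholders, not our real configuration:

# /etc/multipath.conf (illustrative values only)
defaults {
    polling_interval  5            # seconds between path checks
    no_path_retry     12           # queue I/O for 12 checks (~60 s) before failing
}
devices {
    device {
        vendor            "FALCONSTOR"  # placeholder vendor string for the NSS LUNs
        product           ".*"
        path_checker      tur
        fast_io_fail_tmo  5             # fail I/O on a single path quickly
        dev_loss_tmo      60            # declare the path lost only after 60 s
    }
}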
You would need to check with Oracle Support, but I do not know that such a virtualization methodology is supported. I can guarantee that if RAC loses access to its voting disks for long enough, it will evict nodes, as you have seen. You might need to look at ASM failure groups and have ASM manage the "failover", because Oracle writes the OCR/voting files at specific offsets on specific disk devices in the ASM disk (failure) group(s). If it cannot find those files where it expects them, it will crash.
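A rough sketch of what such an ASM failure group layout could look like for the OCR/voting disk group on a stretched cluster: one failure group per datacenter plus a quorum failure group at a third location for the third voting file. The disk group name, the failgroup names and the quorum disk string 'ORCL:RAC_OCR_VOTE_Q' are made up; only the two existing ASMLib disk names are taken from the logs above.

-- NORMAL redundancy mirrors extents between the two failure groups;
-- with a QUORUM failgroup one voting file is placed in each of the three
CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
  FAILGROUP dc1        DISK 'ORCL:RAC_OCR_VOTE_01'
  FAILGROUP dc2        DISK 'ORCL:RAC_OCR_VOTE_02'
  QUORUM FAILGROUP dc3 DISK 'ORCL:RAC_OCR_VOTE_Q'
  ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

That way each datacenter holds a full copy of the OCR/voting data, and losing one appliance does not remove the voting files, as long as the surviving paths keep responding within disktimeout.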