In our environment, we have a 188.8.131.52 cluster installed on 2 nodes (RHEL 7.x, 64-bit), with 5 databases running on the cluster (4 on 184.108.40.206 and 1 on 220.127.116.11). Yesterday, all of a sudden, we had a rebootless eviction on the 2nd node (all the services were restarted: cluster, ASM, and databases).
I noticed these entries in the CRS alert log:
2020-11-09 12:55:17.315 [OCSSD(10243)]CRS-1615: No I/O has completed after 50% of the maximum interval. Voting file /opt/oracle/clufiles/ocrvfdgdisk0 will be considered not functional in 99160 milliseconds
2020-11-09 12:56:37.325 [OCSSD(10243)]CRS-1613: No I/O has completed after 90% of the maximum interval. Voting file /opt/oracle/clufiles/ocrvfdgdisk0 will be considered not functional in 19150 milliseconds
2020-11-09 12:56:57.330 [OCSSD(10243)]CRS-1604: CSSD voting file is offline: /opt/oracle/clufiles/ocrvfdgdisk0; details at (:CSSNM00058:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc.
2020-11-09 12:56:57.334 [OCSSD(10243)]CRS-1606: The number of voting files available, 0, is less than the minimum number of voting files required, 1, resulting in CSSD termination to ensure data integrity; details at (:CSSNM00018:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc
2020-11-09 12:56:57.343 [OCSSD(10243)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc
2020-11-09 12:56:58.220 [OCSSD(10243)]CRS-1652: Starting clean up of CRSD resources.
2020-11-09 12:57:00.996 [OCSSD(10243)]CRS-1605: CSSD voting file is online: /opt/oracle/clufiles/ocrvfdgdisk0; details in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc.
2020-11-09 12:57:02.889 [OCSSD(10243)]CRS-1654: Clean up of CRSD resources finished successfully.
2020-11-09 12:57:02.936 [OCSSD(10243)]CRS-1655: CSSD on node scxora101 detected a problem and started to shutdown.
2020-11-09 12:57:04.038 [OCSSD(10243)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 10243 experienced fatal signal or exception code 6.
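The countdowns in CRS-1615 and CRS-1613 above line up with a CSS disk timeout of roughly 200 seconds (the usual default; the actual value on a given cluster should be confirmed with `crsctl get css disktimeout`). At 50% of the interval about 100 s remain, and at 90% about 20 s remain, matching the 99160 ms and 19150 ms in the messages. A minimal sketch of that arithmetic, assuming the 200 s default:

```python
# Sketch: relate the CRS-1615/CRS-1613 warning countdowns to the CSS
# disktimeout. Assumes the common default of 200 seconds -- this is an
# assumption; verify the real value with `crsctl get css disktimeout`.
DISKTIMEOUT_MS = 200_000

def remaining_ms(elapsed_fraction: float) -> float:
    """Milliseconds left before the voting file is declared not functional,
    given the fraction of the maximum I/O interval already elapsed."""
    return DISKTIMEOUT_MS * (1 - elapsed_fraction)

# At 50% elapsed: ~100000 ms remain (log showed 99160 ms).
# At 90% elapsed: ~20000 ms remain (log showed 19150 ms).
print(remaining_ms(0.50))
print(remaining_ms(0.90))
```

The small gap between the computed values and the logged ones simply reflects the interval between when the threshold was crossed and when the message was written. The practical takeaway is that I/O to `/opt/oracle/clufiles/ocrvfdgdisk0` stalled for well over three minutes, which points at the storage path (multipath, SAN, or NFS hang) rather than at Clusterware itself.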