
rebootless eviction


In our environment, we have a cluster installed on 2 nodes (RHEL 7.x 64-bit). We have 5 databases (4 on … and 1 on …) running on the cluster. Yesterday, all of a sudden, we had a rebootless eviction on the 2nd node (all the services got restarted: cluster, ASM, and databases).

I noticed these messages in the CRS alert log:

2020-11-09 12:55:17.315 [OCSSD(10243)]CRS-1615: No I/O has completed after 50% of the maximum interval. Voting file /opt/oracle/clufiles/ocrvfdgdisk0 will be considered not functional in 99160 milliseconds


2020-11-09 12:56:37.325 [OCSSD(10243)]CRS-1613: No I/O has completed after 90% of the maximum interval. Voting file /opt/oracle/clufiles/ocrvfdgdisk0 will be considered not functional in 19150 milliseconds


2020-11-09 12:56:57.330 [OCSSD(10243)]CRS-1604: CSSD voting file is offline: /opt/oracle/clufiles/ocrvfdgdisk0; details at (:CSSNM00058:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc.


2020-11-09 12:56:57.334 [OCSSD(10243)]CRS-1606: The number of voting files available, 0, is less than the minimum number of voting files required, 1, resulting in CSSD termination to ensure data integrity; details at (:CSSNM00018:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc

2020-11-09 12:56:57.343 [OCSSD(10243)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc

2020-11-09 12:56:58.220 [OCSSD(10243)]CRS-1652: Starting clean up of CRSD resources.

2020-11-09 12:57:00.996 [OCSSD(10243)]CRS-1605: CSSD voting file is online: /opt/oracle/clufiles/ocrvfdgdisk0; details in /opt/oracle/diag/crs/scxora101/crs/trace/ocssd.trc.

2020-11-09 12:57:02.889 [OCSSD(10243)]CRS-1654: Clean up of CRSD resources finished successfully.

2020-11-09 12:57:02.936 [OCSSD(10243)]CRS-1655: CSSD on node scxora101 detected a problem and started to shutdown.

2020-11-09 12:57:04.038 [OCSSD(10243)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 10243 experienced fatal signal or exception code 6.
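For what it's worth, the CRS-1615/1613 messages above let you back out the configured CSS voting-disk timeout: CRS-1615 fires at 50% of the maximum I/O interval and CRS-1613 at 90%, so the remaining milliseconds imply the full interval. A quick sketch of that arithmetic (plain math, not an Oracle API):

```python
# Estimate the CSS voting-disk I/O timeout from the CRS alert-log messages.
# CRS-1615 fires at 50% of the maximum interval, CRS-1613 at 90%;
# the message also reports how many milliseconds remain.
def estimate_timeout_ms(fraction_elapsed, ms_remaining):
    """Full interval = remaining time / remaining fraction."""
    return ms_remaining / (1 - fraction_elapsed)

t50 = estimate_timeout_ms(0.50, 99160)   # from CRS-1615
t90 = estimate_timeout_ms(0.90, 19150)   # from CRS-1613

print(round(t50 / 1000), round(t90 / 1000))  # prints: 198 192
```

Both estimates land close to 200 seconds, consistent with the default CSS `disktimeout`, so the voting file saw no completed I/O for over three minutes before CSSD terminated.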



  • SirajGulam (Member, Posts: 9, Red Ribbon)

    Could someone shed some light on the issue and let me know why this would happen?

    We use NFS for shared disks, but we have voting disks on ASM.

    Would it be a storage or network issue?

    I'm also not sure whether the disk group being nearly full triggered the issue:

    ASMCMD> lsdg
    State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  EXTERN  N         512             512   4096  4194304     47568     3396                0            3396              0             Y  OCRVFDG/

    ASMCMD> du
    Used_MB  Mirror_used_MB
      43988           43988

    ASMCMD> cd _mgmtdb/
    ASMCMD> du .
    Used_MB  Mirror_used_MB
      43796           43796
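Doing the arithmetic on the lsdg/du output does suggest the OCRVFDG disk group is close to full, with the _mgmtdb files (the Grid Infrastructure Management Repository) accounting for nearly all of the used space. A quick sanity check, with the MB figures copied from the output above:

```python
# Figures taken from the ASMCMD lsdg / du output above
total_mb, free_mb = 47568, 3396
used_mb = total_mb - free_mb   # 44172 MB allocated (du reports 43988; the gap is ASM metadata)
mgmtdb_mb = 43796              # du on the _mgmtdb directory

print(f"disk group used: {100 * used_mb / total_mb:.1f}%")               # ~92.9%
print(f"_mgmtdb share of used space: {100 * mgmtdb_mb / used_mb:.1f}%")  # ~99.1%
```

A nearly full disk group by itself shouldn't stall voting-file I/O, though, so the CRS-1615/1613 timeouts point more toward the NFS storage path or network than toward space exhaustion.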

