4 Replies. Latest reply: Nov 11, 2013 2:34 AM by Pradeepcmst

oracle cluster verification utility failed

992918 Newbie

Hi All,

 

I am installing Oracle Clusterware 11g R2 on two nodes, SolNode1 and SolNode2. However, the Oracle Cluster Verification Utility failed, and there is not much detail in the installation log. The following is the relevant content from the installation log generated inside oraInventory:

 

INFO: WARNING:

INFO: This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

INFO: OLR integrity check passed

INFO: OCR detected on ASM. Running ACFS Integrity checks...

INFO: Starting check to see if ASM is running on all cluster nodes...

INFO: ASM Running check passed. ASM is running on all specified nodes

INFO: Starting Disk Groups check to see if at least one Disk Group configured...

INFO: Disk Group Check passed. At least one Disk Group configured

INFO: Task ACFS Integrity check passed

INFO: Checking Oracle Cluster Voting Disk configuration...

INFO: ASM Running check passed. ASM is running on all specified nodes

INFO: Oracle Cluster Voting Disk configuration check passed

INFO: User "grid" is not part of "root" group. Check passed

INFO: Checking if Clusterware is installed on all nodes...

INFO: Check of Clusterware install passed

INFO: Checking if CTSS Resource is running on all nodes...

INFO: CTSS resource check passed

INFO: Querying CTSS for time offset on all nodes...

INFO: Query of CTSS for time offset passed

INFO: Check CTSS state started...

INFO: CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

INFO: Check of clock time offsets passed

INFO: Oracle Cluster Time Synchronization Services check passed

INFO: Checking VIP configuration.

INFO: Checking VIP Subnet configuration.

INFO: Check for VIP Subnet configuration passed.

INFO: Checking VIP reachability

INFO: Starting check for The SSH LoginGraceTime setting ...

INFO: PRVE-0037 : LoginGraceTime setting passed on node "SolNode2"

INFO: PRVE-0037 : LoginGraceTime setting passed on node "SolNode1"

INFO: Check for The SSH LoginGraceTime setting passed

INFO: Post-check for cluster services setup was unsuccessful.

INFO: Checks did not pass for the following node(s):

INFO: SolNode2,SolNode1

INFO:

WARNING:

INFO: Completed Plugin named: Oracle Cluster Verification Utility
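
In case it helps, I believe the same checks can be re-run by hand from the Grid home for more detail (a sketch; the Grid home path below is just a placeholder for my environment):

# Re-run the CVU post-install stage check with verbose output (as the grid user)
/u01/app/11.2.0/grid/bin/cluvfy stage -post crsinst -n SolNode1,SolNode2 -verbose

# Verify the OLR contents as a privileged user, as the log itself suggests
/u01/app/11.2.0/grid/bin/ocrcheck -local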

 

The hosts files on both nodes are as follows:

 

::1     localhost

127.0.0.1       localhost

 

 

#Public IPs

192.168.56.50   SolNode1        loghost

192.168.56.51   SolNode2        loghost

 

#Private IPs

10.10.10.1      SolNode1-piv

10.10.10.2      SolNode2-piv

 

#Virtual IPs

192.168.56.60   SolNode1-vip

192.168.56.61   SolNode2-vip

 

#SCAN IP

192.168.56.70   cluster01-scan

 

I am wondering if there is anything wrong with the virtual IPs. I would appreciate any help from the experts in resolving this issue.
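
For reference, here is roughly how I understand the VIP and SCAN resources can be checked once Clusterware is up (a sketch; run as the grid user from the Grid home bin directory):

# Status of the node applications, including the node VIPs
srvctl status nodeapps
# Status of the SCAN VIP and the SCAN listener
srvctl status scan
srvctl status scan_listener
# Overall cluster resource status
crsctl stat res -t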

 

Thanks,

  • 1. Re: oracle cluster verification utility failed
    mkeskin Explorer

    Hi,

     

    Could you run the command below and send us the output?

     

    ./runcluvfy.sh stage -post hwos -n SolNode2,SolNode1

  • 2. Re: oracle cluster verification utility failed
    992918 Newbie

    Thanks for the reply.

     

    The following is the output of the command:

     

     

     

    Performing post-checks for hardware and operating system setup

     

     

    Checking node reachability...

    Node reachability check passed from node "SolNode1"

     

     

     

     

    Checking user equivalence...

    User equivalence check passed for user "grid"

     

     

    Checking node connectivity...

     

     

    Checking hosts config file...

     

     

    Verification of the hosts config file successful

     

     

    Node connectivity passed for subnet "192.168.56.0" with node(s) SolNode2,SolNode1

    TCP connectivity check passed for subnet "192.168.56.0"

     

     

    Node connectivity passed for subnet "10.0.0.0" with node(s) SolNode2,SolNode1

    TCP connectivity check passed for subnet "10.0.0.0"

     

     

    Node connectivity passed for subnet "169.254.0.0" with node(s) SolNode2,SolNode1

    TCP connectivity check passed for subnet "169.254.0.0"

     

     

     

     

    Interfaces found on subnet "169.254.0.0" that are likely candidates for VIP are:

    SolNode2 e1000g1:169.254.59.232

    SolNode1 e1000g1:169.254.152.107

     

     

    Interfaces found on subnet "192.168.56.0" that are likely candidates for a private interconnect are:

    SolNode2 e1000g0:192.168.56.51 e1000g0:192.168.56.61

    SolNode1 e1000g0:192.168.56.50 e1000g0:192.168.56.60 e1000g0:192.168.56.70

     

     

    Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:

    SolNode2 e1000g1:10.10.10.2

    SolNode1 e1000g1:10.10.10.1

     

     

    Node connectivity check passed

     

     

    Check for multiple users with UID value 0 passed

    Time zone consistency check passed

     

     

    Checking shared storage accessibility...

     

     

    WARNING:

    Unable to determine the sharedness of /dev/dsk/c0t1d0s0 on nodes:

      SolNode2,SolNode1

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t2d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t3d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t4d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t5d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t6d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t7d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t8d0s0                     SolNode2 SolNode1      

     

     

      Disk                                  Sharing Nodes (2 in count)

      ------------------------------------  ------------------------

      /dev/dsk/c0t9d0s0                     SolNode2 SolNode1      

     

     

     

     

    Shared storage check was successful on nodes "SolNode2,SolNode1"

     

     

    Post-check for hardware and operating system setup was successful.

     

     

    One more thing: the following errors also appeared in the cluster installation log. As I am using VM machines, I am not able to allocate enough memory to meet the requirement. Can you please also let me know whether I can ignore these errors?

     

    INFO: INFO: Verification Result for Node:SolNode2

    INFO: INFO: Expected Value:4GB (4194304.0KB)

    INFO: INFO: Actual Value:2.2461GB (2355200.0KB)

    INFO: INFO: Error Message:PRVF-7530 : Sufficient physical memory is not available on node "SolNode2" [Required physical memory = 4GB (4194304.0KB)]

    INFO: INFO: Cause: Amount of physical memory (RAM) found does not meet minimum memory requirements.

    INFO: INFO: Action: Add physical memory (RAM) to the node specified.

     

     

    INFO: INFO: Verification Result for Node:SolNode1

    INFO: INFO: Expected Value:4GB (4194304.0KB)

    INFO: INFO: Actual Value:3.4242GB (3590520.0KB)

    INFO: INFO: Error Message:PRVF-7573 : Sufficient swap size is not available on node "SolNode1" [Required = 4GB (4194304.0KB) ; Found = 3.4242GB (3590520.0KB)]

    INFO: INFO: Cause: The swap size found does not meet the minimum requirement.

    INFO: INFO: Action: Increase swap size to at least meet the minimum swap space requirement.

     

    Thanks,

  • 3. Re: oracle cluster verification utility failed
    AjithPathiyil Newbie

    Hi 992918,

     

    A two-node RAC cluster would require at least 4 GB of RAM (sometimes 3 GB is also enough), and there is no workaround for ignoring this minimum requirement, for the reason below: your host and guest OS have only about 2 GB of RAM to share.

     

    I understand that you are trying to set up RAC on a VM (the host operating system also shares the same physical memory you have, i.e. approximately 2 GB).
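
    If it helps to confirm what each node actually has, something like this should show the installed memory on Solaris (a sketch):

    # Installed physical memory on a Solaris node
    prtconf | grep "Memory size"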

  • 4. Re: oracle cluster verification utility failed
    Pradeepcmst Journeyer

    Hi,

    The error says you do not have sufficient RAM and swap space on the specified nodes. Please check that as well.
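
    To check and, if necessary, grow swap on Solaris, something along these lines usually works (a sketch; the swap file path and size are placeholders):

    # List configured swap devices and summarize swap usage
    swap -l
    swap -s
    # Add a 2 GB swap file as root (path and size are examples)
    mkfile 2g /export/swapfile
    swap -a /export/swapfile
    # Add a matching entry to /etc/vfstab if it should persist across reboots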

     

     

     

    Regards,

    Pradeep. V
