
TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC

895680 Newbie
Hi,

The runcluvfy.sh script fails on my Linux nodes. Below are the details of the checks that failed:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac1"
Destination Node                      Reachable?
------------------------------------ ------------------------
rac2                                  yes
rac1                                  yes
Result: Node reachability check passed from node "rac1"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name                             Comment
------------------------------------ ------------------------
rac2                                  passed
rac1                                  passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
Node Name    Status                   Comment
------------ ------------------------ ------------------------
rac2         passed
rac1         passed

Verification of the hosts config file successful


Interface information for node "rac2"
Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0   192.168.2.102   192.168.2.0     0.0.0.0         192.168.2.1     08:00:27:5E:16:D3 1500
eth1   10.10.10.51     10.10.10.0      0.0.0.0         192.168.2.1     08:00:27:44:48:6D 1500


Interface information for node "rac1"
Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0   192.168.2.101   192.168.2.0     0.0.0.0         192.168.2.1     08:00:27:81:9D:76 1500
eth1   10.10.10.50     10.10.10.0      0.0.0.0         192.168.2.1     08:00:27:12:5F:F4 1500


Check: Node connectivity of subnet "192.168.2.0"
Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac2:eth0                       rac1:eth0                       yes
Result: Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1


Check: TCP connectivity of subnet "192.168.2.0"
Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac1:192.168.2.101              rac2:192.168.2.102              failed
Result: TCP connectivity check failed for subnet "192.168.2.0"


Check: Node connectivity of subnet "10.10.10.0"
Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac2:eth1                       rac1:eth1                       yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) rac2,rac1


Check: TCP connectivity of subnet "10.10.10.0"
Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac1:10.10.10.50                rac2:10.10.10.51                failed
Result: TCP connectivity check failed for subnet "10.10.10.0"


Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
rac2 eth0:192.168.2.102
rac1 eth0:192.168.2.101

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
rac2 eth1:10.10.10.51
rac1 eth1:10.10.10.50

Result: Node connectivity check passed

********************************************************************************************************************************

Here are the details of the two virtual machines:
VM used: Oracle VirtualBox 4.0.10
Cluster Version: 11g R2
Linux version: RHEL 5.3
No. of nodes: 2

Node details:
Node 1 hostname: rac1
Public IP (eth0): 192.168.2.101
Subnet mask: 255.255.255.0
Default gateway: 192.168.2.1
Private IP (eth1): 10.10.10.50
Subnet mask: 255.255.255.0
Default gateway: none

Node 2 hostname: rac2
Public IP (eth0): 192.168.2.102
Subnet mask: 255.255.255.0
Default gateway: 192.168.2.1
Private IP (eth1): 10.10.10.51
Subnet mask: 255.255.255.0
Default gateway: none


Contents of /etc/hosts (DNS is not configured):

127.0.0.1 rac2.mydomain.com rac2 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# Public
192.168.2.101 rac1.mydomain.com rac1
192.168.2.102 rac2.mydomain.com rac2
# Private
10.10.10.50 rac1-priv.mydomain.com rac1-priv
10.10.10.51 rac2-priv.mydomain.com rac2-priv
# Virtual
192.168.2.111 rac1-vip.mydomain.com rac1-vip
192.168.2.112 rac2-vip.mydomain.com rac2-vip
# SCAN
192.168.2.201 rac-scan.mydomain.com rac-scan
192.168.2.202 rac-scan.mydomain.com rac-scan
192.168.2.203 rac-scan.mydomain.com rac-scan


SSH connectivity:
[oracle@rac1 ~]$ ssh rac2
Last login: Fri Feb 3 23:10:06 2012 from rac1.mydomain.com

[grid@rac2 ~]$ ssh rac1
Last login: Fri Feb 3 23:05:27 2012 from rac2.mydomain.com
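
(For reference, a minimal sketch of how this passwordless SSH equivalence is typically set up; the key type and filenames below are the usual defaults, not necessarily what I used:)

# On each node, as the grid user: generate a key pair with no passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to every node, including the local one
ssh-copy-id grid@rac1
ssh-copy-id grid@rac2
# cluvfy needs this to succeed with no password or host-key prompt
ssh rac2 date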


Ping command status:

On RAC2:

[grid@rac2 ~]$ ping rac1
PING rac1.mydomain.com (192.168.2.101) 56(84) bytes of data.
64 bytes from rac1.mydomain.com (192.168.2.101): icmp_seq=1 ttl=64 time=0.460 ms
64 bytes from rac1.mydomain.com (192.168.2.101): icmp_seq=2 ttl=64 time=0.307 ms
64 bytes from rac1.mydomain.com (192.168.2.101): icmp_seq=3 ttl=64 time=0.425 ms

--- rac1.mydomain.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.307/0.397/0.460/0.067 ms
[grid@rac2 ~]$ ping rac1-priv
PING rac1-priv.mydomain.com (10.10.10.50) 56(84) bytes of data.
64 bytes from rac1-priv.mydomain.com (10.10.10.50): icmp_seq=1 ttl=64 time=50.6 ms
64 bytes from rac1-priv.mydomain.com (10.10.10.50): icmp_seq=2 ttl=64 time=0.751 ms

--- rac1-priv.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.751/25.703/50.656/24.953 ms
[grid@rac2 ~]$ ping rac2
PING rac2.mydomain.com (127.0.0.1) 56(84) bytes of data.
64 bytes from rac2.mydomain.com (127.0.0.1): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from rac2.mydomain.com (127.0.0.1): icmp_seq=2 ttl=64 time=0.065 ms

--- rac2.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.065/0.074/0.084/0.012 ms
[grid@rac2 ~]$ ping rac2-priv
PING rac2-priv.mydomain.com (10.10.10.51) 56(84) bytes of data.
64 bytes from rac2-priv.mydomain.com (10.10.10.51): icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from rac2-priv.mydomain.com (10.10.10.51): icmp_seq=2 ttl=64 time=0.080 ms

--- rac2-priv.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 0.039/0.059/0.080/0.021 ms




On RAC1 node:

[oracle@rac1 ~]$ ping rac2
PING rac2.mydomain.com (192.168.2.102) 56(84) bytes of data.
64 bytes from rac2.mydomain.com (192.168.2.102): icmp_seq=1 ttl=64 time=0.428 ms
64 bytes from rac2.mydomain.com (192.168.2.102): icmp_seq=2 ttl=64 time=0.387 ms

--- rac2.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.387/0.407/0.428/0.028 ms
[oracle@rac1 ~]$ ping rac2-priv
PING rac2-priv.mydomain.com (10.10.10.51) 56(84) bytes of data.
64 bytes from rac2-priv.mydomain.com (10.10.10.51): icmp_seq=1 ttl=64 time=0.552 ms
64 bytes from rac2-priv.mydomain.com (10.10.10.51): icmp_seq=2 ttl=64 time=0.528 ms

--- rac2-priv.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.528/0.540/0.552/0.012 ms
[oracle@rac1 ~]$ ping rac1
PING rac1.mydomain.com (127.0.0.1) 56(84) bytes of data.
64 bytes from rac1.mydomain.com (127.0.0.1): icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from rac1.mydomain.com (127.0.0.1): icmp_seq=2 ttl=64 time=0.039 ms

--- rac1.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.039/0.040/0.042/0.006 ms
[oracle@rac1 ~]$ ping rac1-priv
PING rac1-priv.mydomain.com (10.10.10.50) 56(84) bytes of data.
64 bytes from rac1-priv.mydomain.com (10.10.10.50): icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from rac1-priv.mydomain.com (10.10.10.50): icmp_seq=2 ttl=64 time=0.035 ms

--- rac1-priv.mydomain.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.035/0.065/0.095/0.030 ms


Grid user:
[grid@rac1 grid]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1022(asmoper),1031(dba)

Database user:
[oracle@rac1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1021(asmdba),1031(dba)


SSH is configured properly for both the grid and oracle users, though I have provided the output only for the grid user.
Ping also works fine on both nodes, so why is the TCP connectivity check failing?
What have I missed in my Linux node configuration?
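
For what it's worth, the TCP check can be reproduced by hand with netcat, independently of cluvfy (the port number below is an arbitrary unprivileged one, not necessarily what cluvfy uses; some netcat builds want "nc -l -p 42000"):

# On rac2: listen on an arbitrary unprivileged TCP port
nc -l 42000

# On rac1: attempt a connection to that port on rac2's public IP (5 s timeout)
nc -v -w 5 192.168.2.102 42000
# If ping succeeds but this is refused or times out, a host firewall
# (iptables) between the nodes is the usual suspect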

Regards,
Purnima Johari
  • 1. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Liron Amitzi Oracle ACE
    Hi,
    Your hosts file configuration is invalid. I'm not sure that this is the problem, but you should fix it first and try again: in /etc/hosts you cannot have a single name pointing to multiple IP addresses.

    >
    127.0.0.1 rac2.mydomain.com rac2 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    # Public
    192.168.2.101 rac1.mydomain.com rac1
    192.168.2.102 rac2.mydomain.com rac2
    # Private
    10.10.10.50 rac1-priv.mydomain.com rac1-priv
    10.10.10.51 rac2-priv.mydomain.com rac2-priv
    # Virtual
    192.168.2.111 rac1-vip.mydomain.com rac1-vip
    192.168.2.112 rac2-vip.mydomain.com rac2-vip
    # SCAN
    192.168.2.201 rac-scan.mydomain.com rac-scan
    192.168.2.202 rac-scan.mydomain.com rac-scan
    192.168.2.203 rac-scan.mydomain.com rac-scan
    >

    Your hosts file should look like this:
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    # Public
    192.168.2.101 rac1.mydomain.com rac1
    192.168.2.102 rac2.mydomain.com rac2
    # Private
    10.10.10.50 rac1-priv.mydomain.com rac1-priv
    10.10.10.51 rac2-priv.mydomain.com rac2-priv
    # Virtual
    192.168.2.111 rac1-vip.mydomain.com rac1-vip
    192.168.2.112 rac2-vip.mydomain.com rac2-vip
    # SCAN
    192.168.2.201 rac-scan.mydomain.com rac-scan
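
    Also, those duplicate SCAN lines never do anything: hosts-file lookups stop at the first matching name, so only the first rac-scan entry is ever returned. A quick check (the output shown is illustrative):

    # Even with three rac-scan lines in /etc/hosts, resolution returns one entry
    getent hosts rac-scan
    # 192.168.2.201   rac-scan.mydomain.com rac-scan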
  • 2. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi,

    I have modified /etc/hosts on both nodes, but I am still facing the same TCP issue:


    [grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2

    Performing pre-checks for cluster services setup

    Checking node reachability...
    Node reachability check passed from node "rac1"


    Checking user equivalence...
    User equivalence check passed for user "grid"

    Checking node connectivity...

    Checking hosts config file...

    Verification of the hosts config file successful

    Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1
    TCP connectivity check failed for subnet "192.168.2.0"

    Node connectivity passed for subnet "10.10.10.0" with node(s) rac2,rac1
    TCP connectivity check failed for subnet "10.10.10.0"

    Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
    rac2 eth0:192.168.2.102
    rac1 eth0:192.168.2.101

    Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
    rac2 eth1:10.10.10.51
    rac1 eth1:10.10.10.50

    Node connectivity check passed

    **********************************************************************************************************************************

    These are the contents of /etc/hosts on rac1:

    [grid@rac1 grid]$ cat /etc/hosts
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    # Public
    192.168.2.101 rac1.mydomain.com rac1
    192.168.2.102 rac2.mydomain.com rac2
    # Private
    10.10.10.50 rac1-priv.mydomain.com rac1-priv
    10.10.10.51 rac2-priv.mydomain.com rac2-priv
    # Virtual
    192.168.2.111 rac1-vip.mydomain.com rac1-vip
    192.168.2.112 rac2-vip.mydomain.com rac2-vip
    # SCAN
    192.168.2.201 rac-scan.mydomain.com rac-scan

    Contents of /etc/hosts on rac2:

    [grid@rac2 grid]$ cat /etc/hosts
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    # Public
    192.168.2.101 rac1.mydomain.com rac1
    192.168.2.102 rac2.mydomain.com rac2
    # Private
    10.10.10.50 rac1-priv.mydomain.com rac1-priv
    10.10.10.51 rac2-priv.mydomain.com rac2-priv
    # Virtual
    192.168.2.111 rac1-vip.mydomain.com rac1-vip
    192.168.2.112 rac2-vip.mydomain.com rac2-vip
    # SCAN
    192.168.2.201 rac-scan.mydomain.com rac-scan


    Regards,
    Purnima
  • 3. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Liron Amitzi Oracle ACE
    Hi Purnima,
    Did you check MOS note 1335136.1? What network adapter name do you have?
    Also, what is the exact Oracle version?
    Liron
  • 4. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi,

    The following network adapter is used:

    Adapter Name: Intel(R) Centrino(R) Advanced-N 6200 AGN
    Adapter Type: Intel PRO/1000 MT Desktop (82540EM)


    I checked the Metalink note, but it refers to the network adapter virbr0, which is not present in my case.
    Regards,
    Purnima
  • 5. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi,

    Oracle version is 11.2.0.1.
    Linux version is RHEL 5 Update 3.

    Regards,
    Purnima
  • 6. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Liron Amitzi Oracle ACE
    Hi,
    Please check MOS note 11071865.8; it talks about this error and says that it's a bug in 11.2.0.1 and can be ignored.
    Make sure that you see the same symptoms they describe. If so, install 11.2.0.3 (which fixes this issue) or simply ignore it.

    HTH
    Liron
  • 7. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi,

    I checked the note Bug 11071865 - Cluvfy failed with 'TCP connectivity check' [ID 11071865.8].
    As per this note, the issue is reported when the Grid software is already installed. The TCP connectivity error message is recorded in the cvutrace.log file, which is present in <GRID_HOME>/cv/log.
    Here in my case, I have yet to install the Grid software on the Linux machines. I am running the runcluvfy.sh script provided with the Grid software bundle, so the CVU trace file is not present.


    Regards,
    Purnima
  • 8. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Liron Amitzi Oracle ACE
    I still think it might be related. I don't think it matters whether the software is installed or not.
    I suggest you try running the cluvfy of 11.2.0.3 and see what happens.
    By the way, why don't you install 11.2.0.3?
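
    Also, you can get a CVU trace even when running runcluvfy.sh from the staging area; SRVM_TRACE is the documented switch, and CV_TRACELOC (if your version honors it) redirects the trace directory (the location below is just an example):

    # Turn on cluvfy tracing before running the staged script
    mkdir -p /tmp/cvutrace
    export CV_TRACELOC=/tmp/cvutrace   # example location; any writable directory
    export SRVM_TRACE=true
    ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
    # trace files are then written under /tmp/cvutrace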
  • 9. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Rajesh.Rathod Explorer
    Hi,

    (Just a try.)
    Can you try setting the gateway IP address on both virtual machines/nodes to the IP address of the "host" machine (i.e. the physical machine on which the two virtual machines are configured)? Currently the gateway address is none.

    Or can you please let us know which IP address is set as the default gateway on both virtual machines.
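
    On RHEL 5 that would be something like this in the interface script (a sketch; the gateway value is a placeholder for your host machine's address):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (per node; adjust IPADDR)
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.168.2.101
    NETMASK=255.255.255.0
    GATEWAY=192.168.2.1   # placeholder: put the physical host machine's IP here
    ONBOOT=yes

    # apply the change
    service network restart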

  • 10. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi Liron,

    I read somewhere on the net that DNS configuration is necessary for 11.2.0.3 RAC, and moreover I have RAM constraints.
    Anyway, I am now downloading the 11.2.0.3 Grid software and will update you soon after running the runcluvfy.sh script from that software location.
    Will it be an issue if I go ahead with 11.2.0.3 without DNS configuration for the SCAN IP addresses?


    Rajesh,

    Here are the details of the two virtual machines:
    VM used: Oracle VirtualBox 4.0.10
    Cluster Version: 11g R2
    Linux version: RHEL 5.3
    No. of nodes: 2

    Node details:
    Node 1 hostname: rac1
    Public IP (eth0): 192.168.2.101
    Subnet mask: 255.255.255.0
    Default gateway: 192.168.2.1
    Private IP (eth1): 10.10.10.50
    Subnet mask: 255.255.255.0
    Default gateway: none

    Node 2 hostname: rac2
    Public IP (eth0): 192.168.2.102
    Subnet mask: 255.255.255.0
    Default gateway: 192.168.2.1
    Private IP (eth1): 10.10.10.51
    Subnet mask: 255.255.255.0
    Default gateway: none

    The gateway address is null only for eth1, which is used for the private IPs. I even tried setting the gateway address for eth0 (public IP) to that of the host machine on both VM nodes, but the issue is still the same.

    Regards,
    Purnima
  • 12. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    Liron Amitzi Oracle ACE
    I haven't heard that 11.2.0.3 needs DNS. In the documentation, Oracle always recommends using DNS, but a hosts-file configuration is also supported. And regarding RAM, are you sure that 11.2.0.3 requires more RAM than 11.2.0.1? There is nothing in the documentation that says that.

    If I were you, I would go for 11.2.0.3 (and hope it will not have this problem with cluvfy).

    Liron
  • 13. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi Liron,

    This time I tried with the Grid 11.2.0.3 runcluvfy.sh script.
    Again the same issue. Here is the output:

    Performing pre-checks for cluster services setup

    Checking node reachability...
    Node reachability check passed from node "rac2"


    Checking user equivalence...
    User equivalence check passed for user "grid"

    Checking node connectivity...

    Checking hosts config file...

    Verification of the hosts config file successful

    Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1

    ERROR:
    PRVF-7617 : Node connectivity between "rac2 : 192.168.2.102" and "rac1 : 192.168.2.101" failed
    TCP connectivity check failed for subnet "192.168.2.0"

    Node connectivity passed for subnet "10.10.10.0" with node(s) rac2,rac1

    ERROR:
    PRVF-7617 : Node connectivity between "rac2 : 10.10.10.51" and "rac1 : 10.10.10.50" failed
    TCP connectivity check failed for subnet "10.10.10.0"


    Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
    rac2 eth0:192.168.2.102
    rac1 eth0:192.168.2.101

    WARNING:
    Could not find a suitable set of interfaces for the private interconnect
    Checking subnet mask consistency...
    Subnet mask consistency check passed for subnet "192.168.2.0".
    Subnet mask consistency check passed for subnet "10.10.10.0".
    Subnet mask consistency check passed.

    Node connectivity check failed

    Checking multicast communication...

    Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.101" on node "rac1"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.101" on node "rac1"
    Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.101" on node "rac1"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.101" on node "rac1"
    Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
    PRVG-11134 : Interface "10.10.10.51" on node "rac2" is not able to communicate with interface "10.10.10.51" on node "rac2"
    PRVG-11134 : Interface "10.10.10.51" on node "rac2" is not able to communicate with interface "10.10.10.50" on node "rac1"
    PRVG-11134 : Interface "10.10.10.50" on node "rac1" is not able to communicate with interface "10.10.10.51" on node "rac2"
    PRVG-11134 : Interface "10.10.10.50" on node "rac1" is not able to communicate with interface "10.10.10.50" on node "rac1"
    Checking subnet "10.10.10.0" for multicast communication with multicast group "224.0.0.251"...
    PRVG-11134 : Interface "10.10.10.51" on node "rac2" is not able to communicate with interface "10.10.10.51" on node "rac2"
    PRVG-11134 : Interface "10.10.10.51" on node "rac2" is not able to communicate with interface "10.10.10.50" on node "rac1"
    PRVG-11134 : Interface "10.10.10.50" on node "rac1" is not able to communicate with interface "10.10.10.51" on node "rac2"
    PRVG-11134 : Interface "10.10.10.50" on node "rac1" is not able to communicate with interface "10.10.10.50" on node "rac1"

    Checking ASMLib configuration.
    Check for ASMLib configuration passed.
    ___________________________________________________________________________________________________________________________________

    Also, this time it checked the /etc/resolv.conf file. Output:

    File "/etc/resolv.conf" does not have both domain and search entries defined
    domain entry in file "/etc/resolv.conf" is consistent across nodes
    search entry in file "/etc/resolv.conf" is consistent across nodes
    All nodes have one search entry defined in file "/etc/resolv.conf"
    PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1

    File "/etc/resolv.conf" is not consistent across nodes

    Time zone consistency check passed

    Pre-check for cluster services setup was unsuccessful on all the nodes.

    ***********************************************************************************************************************************

    The 11.2.0.3 runcluvfy.sh script is showing up many issues.
    Ping and SSH are working perfectly fine on both nodes.
    Please suggest.

    Regards,
    Purnima
  • 14. Re: TCP connectivity of subnet check failed, execution of runcluvfy.sh, RAC
    895680 Newbie
    Hi Liron,

    In my earlier post I reported the errors below:

    Checking multicast communication...

    Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.102" on node "rac2" is not able to communicate with interface "192.168.2.101" on node "rac1"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.102" on node "rac2"
    PRVG-11134 : Interface "192.168.2.101" on node "rac1" is not able to communicate with interface "192.168.2.101" on node "rac1"

    As per the Metalink note "Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement (Doc ID 1212703.1)", it is due to the multicast address ports.
    I ran mcasttest.pl. Below is the output:

    [grid@rac2 mcasttest]$ perl mcasttest.pl -n rac2,rac1 -i eth0,eth1
    ########### Setup for node rac2 ##########
    Checking node access 'rac2'
    Checking node login 'rac2'
    Checking/Creating Directory /tmp/mcasttest for binary on node 'rac2'
    Distributing mcast2 binary to node 'rac2'
    ########### Setup for node rac1 ##########
    Checking node access 'rac1'
    Checking node login 'rac1'
    Checking/Creating Directory /tmp/mcasttest for binary on node 'rac1'
    Distributing mcast2 binary to node 'rac1'
    ########### testing Multicast on all nodes ##########

    Test for Multicast address 230.0.1.0

    Feb 7 21:24:19 | Multicast Failed for eth0 using address 230.0.1.0:42000
    Feb 7 21:24:50 | Multicast Failed for eth1 using address 230.0.1.0:42001

    Test for Multicast address 224.0.0.251

    Feb 7 21:25:20 | Multicast Failed for eth0 using address 224.0.0.251:42002
    Feb 7 21:25:52 | Multicast Failed for eth1 using address 224.0.0.251:42003
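
    To see what the firewall was doing at this point, the active rule set can be listed (run as root; a quick sanity check, not an mcasttest.pl step):

    # List active iptables rules with numeric addresses/ports and packet
    # counters, to see which chains are dropping the multicast/TCP traffic
    iptables -L -n -v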

    To rectify this, I opened all 4 UDP ports (42000-42003) in the iptables firewall. After opening all the ports, the mcasttest.pl checks passed, but the runcluvfy.sh script still failed with the multicast and TCP issues.
    I disabled the firewall and re-ran the runcluvfy.sh script. This time the TCP and multicast address checks passed on both nodes. I even tried with the 11.2.0.1 runcluvfy script; it also passed these checks because the firewall is disabled. So it definitely has something to do with the firewall settings, or the IP addresses or ports used for the TCP connection check, which I am not able to figure out.
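
    Instead of disabling iptables completely, rules along these lines should also let the checks pass (a sketch using my subnets; adjust for yours):

    # Allow all traffic from the public and private cluster subnets
    iptables -I INPUT -s 192.168.2.0/24 -j ACCEPT
    iptables -I INPUT -s 10.10.10.0/24 -j ACCEPT
    service iptables save

    # What I actually did: disable the firewall entirely on both nodes
    service iptables stop
    chkconfig iptables off
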
    This site also describes the multicast address issue with the 11.2.0.3 version: http://ronr.blogspot.in/2012/01/11203-grid-installation-issue-multicast.html
    My case is slightly different, as I faced issues related to both the TCP connection and the multicast address.
    Kindly suggest if you have any idea about this issue.
    Below is the output of the runcluvfy.sh script:

    Performing pre-checks for cluster services setup

    Checking node reachability...

    Check: Node reachability from node "rac1"
    Destination Node                      Reachable?
    ------------------------------------ ------------------------
    rac2                                  yes
    rac1                                  yes
    Result: Node reachability check passed from node "rac1"


    Checking user equivalence...

    Check: User equivalence for user "grid"
    Node Name                             Status
    ------------------------------------ ------------------------
    rac2                                  passed
    rac1                                  passed
    Result: User equivalence check passed for user "grid"

    Checking node connectivity...

    Checking hosts config file...
    Node Name                             Status
    ------------------------------------ ------------------------
    rac2                                  passed
    rac1                                  passed

    Verification of the hosts config file successful


    Interface information for node "rac2"
    Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
    ------ --------------- --------------- --------------- --------------- ----------------- ------
    eth0   192.168.2.102   192.168.2.0     0.0.0.0         192.168.2.1     08:00:27:5E:16:D3 1500
    eth1   10.10.10.51     10.10.10.0      0.0.0.0         192.168.2.1     08:00:27:44:48:6D 1500


    Interface information for node "rac1"
    Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
    ------ --------------- --------------- --------------- --------------- ----------------- ------
    eth0   192.168.2.101   192.168.2.0     0.0.0.0         192.168.2.1     08:00:27:81:9D:76 1500
    eth1   10.10.10.50     10.10.10.0      0.0.0.0         192.168.2.1     08:00:27:12:5F:F4 1500


    Check: Node connectivity of subnet "192.168.2.0"
    Source                          Destination                     Connected?
    ------------------------------  ------------------------------  ----------------
    rac2[192.168.2.102]             rac1[192.168.2.101]             yes
    Result: Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1


    Check: TCP connectivity of subnet "192.168.2.0"
    Source                          Destination                     Connected?
    ------------------------------  ------------------------------  ----------------
    rac1:192.168.2.101              rac2:192.168.2.102              passed
    Result: TCP connectivity check passed for subnet "192.168.2.0"


    Check: Node connectivity of subnet "10.10.10.0"
    Source                          Destination                     Connected?
    ------------------------------  ------------------------------  ----------------
    rac2[10.10.10.51]               rac1[10.10.10.50]               yes
    Result: Node connectivity passed for subnet "10.10.10.0" with node(s) rac2,rac1


    Check: TCP connectivity of subnet "10.10.10.0"
    Source                          Destination                     Connected?
    ------------------------------  ------------------------------  ----------------
    rac1:10.10.10.50                rac2:10.10.10.51                passed
    Result: TCP connectivity check passed for subnet "10.10.10.0"


    Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
    rac2 eth0:192.168.2.102
    rac1 eth0:192.168.2.101

    Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
    rac2 eth1:10.10.10.51
    rac1 eth1:10.10.10.50
    Checking subnet mask consistency...
    Subnet mask consistency check passed for subnet "192.168.2.0".
    Subnet mask consistency check passed for subnet "10.10.10.0".
    Subnet mask consistency check passed.

    Result: Node connectivity check passed

    Checking multicast communication...

    Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
    Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.

    Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
    Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.

    Check of multicast communication passed.

    Checking ASMLib configuration.
    Node Name                             Status
    ------------------------------------ ------------------------
    rac2                                  passed
    rac1                                  passed
    Result: Check for ASMLib configuration passed.


    -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Also, when running the 11.2.0.3 runcluvfy.sh script, the check below fails:

    Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
    File "/etc/resolv.conf" does not have both domain and search entries defined
    Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
    domain entry in file "/etc/resolv.conf" is consistent across nodes
    Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
    search entry in file "/etc/resolv.conf" is consistent across nodes
    Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
    All nodes have one search entry defined in file "/etc/resolv.conf"
    Checking all nodes to make sure that search entry is "mydomain.com" as found on node "rac2"
    All nodes of the cluster have same value for 'search'
    Checking DNS response time for an unreachable node
    Node Name                             Status
    ------------------------------------ ------------------------
    rac2                                  failed
    rac1                                  failed
    PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1

    File "/etc/resolv.conf" is not consistent across nodes


    Can this check be ignored if entries are present in /etc/hosts and DNS is not configured?
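
    I assume the check simply times a lookup of a name that cannot resolve; without a reachable DNS server the query hangs until the resolver gives up, something like:

    # The host name is deliberately bogus; with no DNS server this hangs
    # until the resolver times out, which is what PRVF-5636 measures
    time nslookup nonexistent-node.mydomain.com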


    Regards,
    Purnima
