
RDBMS 11.2.0.2 Install fails with VIP error

740885 Newbie
I have already successfully installed 11.2.0.2 Grid Infrastructure, and I have a working 10gR2 RAC database.

I am trying to install the 11.2.0.2 database binaries, but I get errors during the prerequisite checks.

The error is:
PRVF-10205 : The VIPs do not all share the same subnetwork, or the VIP subnetwork does not match that of any public network interface in the cluster
Details of the interfaces are as follows:
/u01/app/product/11.2.0/grid/bin/oifcfg iflist -p -n
bond0  10.180.0.0  PRIVATE  255.255.0.0
bond1  10.255.255.0  PRIVATE  255.255.255.0
bond1  169.254.0.0  UNKNOWN  255.255.0.0
and
/u01/app/product/11.2.0/grid/bin/oifcfg getif -global
bond1  10.255.255.0  global  cluster_interconnect
bond0  10.180.201.0  global  public
ifconfig returns the following:
bond0     Link encap:Ethernet  HWaddr 00:1A:4B:DC:F8:54
          inet addr:10.180.201.15  Bcast:10.180.255.255  Mask:255.255.0.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:9193348 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20200012 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1000409336 (954.0 MiB)  TX bytes:30330339159 (28.2 GiB)

bond0:1   Link encap:Ethernet  HWaddr 00:1A:4B:DC:F8:54
          inet addr:10.180.201.21  Bcast:10.180.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

bond0:4   Link encap:Ethernet  HWaddr 00:1A:4B:DC:F8:54
          inet addr:10.180.201.115  Bcast:10.180.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

bond1     Link encap:Ethernet  HWaddr 00:1A:4B:DC:C4:72
          inet addr:10.255.255.15  Bcast:10.255.255.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:12504677 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11844935 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4286044266 (3.9 GiB)  TX bytes:3697250614 (3.4 GiB)

bond1:1   Link encap:Ethernet  HWaddr 00:1A:4B:DC:C4:72
          inet addr:169.254.136.7  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

eth0      Link encap:Ethernet  HWaddr 00:1A:4B:DC:F8:54
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:7795753 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20200012 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:842020036 (803.0 MiB)  TX bytes:30330339159 (28.2 GiB)
          Interrupt:169 Memory:f6000000-f6012100

eth1      Link encap:Ethernet  HWaddr 00:1A:4B:DC:C4:72
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:12504291 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11844935 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4286019502 (3.9 GiB)  TX bytes:3697250614 (3.4 GiB)
          Interrupt:169 Memory:fa000000-fa012100

eth2      Link encap:Ethernet  HWaddr 00:1A:4B:DC:F8:54
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1397595 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:158389300 (151.0 MiB)  TX bytes:0 (0.0 b)
          Interrupt:193

eth3      Link encap:Ethernet  HWaddr 00:1A:4B:DC:C4:72
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:386 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24764 (24.1 KiB)  TX bytes:0 (0.0 b)
          Interrupt:169

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1478358 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1478358 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:391073271 (372.9 MiB)  TX bytes:391073271 (372.9 MiB)
What could be causing this issue?

Thanks
Skulls
  • 1. Re: RDBMS 11.2.0.2 fails with VIP error
    809260 Newbie
    The ifconfig output for the node you've pasted looks okay to me - all the public interfaces are on the same subnet.
    I suggest you double-check on the remaining nodes that all the public IPs and the VIPs are on the same subnet as this node.
    Also, ensure that the SCAN IP is in the same subnet.
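    For example, here is a minimal sketch of the checks one could run, using the standard 11.2 srvctl and oifcfg tools and the Grid home path shown earlier in this thread (adjust to your environment):
    # show the network and VIP registered for each node
    /u01/app/product/11.2.0/grid/bin/srvctl config nodeapps
    # show the SCAN name and IPs; they should fall in the same public subnet
    /u01/app/product/11.2.0/grid/bin/srvctl config scan
    # compare against the subnets Clusterware has registered for its interfaces
    /u01/app/product/11.2.0/grid/bin/oifcfg getif -global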
  • 2. Re: RDBMS 11.2.0.2 fails with VIP error
    740885 Newbie
    Thanks. Yes, the other nodes are on the same subnet.

    What is confusing is that some cluvfy checks report a problem, but others don't.

    For example, if I run the utility with the options -pre dbinst, I get the failure on node connectivity (as well as the VIP failure):
    ./runcluvfy.sh stage -pre dbinst -n rac-node1 -verbose
    
    Performing pre-checks for database installation
    
    Checking node reachability...
    
    Check: Node reachability from node rac-node1
      Destination Node                      Reachable?
      ------------------------------------  ------------------------
      rac-node1                             yes
    Result: Node reachability check passed from node rac-node1
    
    
    Checking user equivalence...
    
    Check: User equivalence for user oracle
      Node Name                             Comment
      ------------------------------------  ------------------------
      rac-node1                             passed
    Result: User equivalence check passed for user oracle
    
    Checking node connectivity...
    
    Checking hosts config file...
      Node Name     Status                    Comment
      ------------  ------------------------  ------------------------
      rac-node1       passed
    
    Verification of the hosts config file successful
    
    
    Interface information for node rac-node1
     Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
     ------ --------------- --------------- --------------- --------------- ----------------- ------
     bond0  10.180.201.15   10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond0  10.180.201.21   10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond0  10.180.201.115  10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond1  10.255.255.15   10.255.255.0    0.0.0.0         10.180.10.1     00:1A:4B:DC:C4:72 1500
     bond1  169.254.136.7   169.254.0.0     0.0.0.0         10.180.10.1     00:1A:4B:DC:C4:72 1500
    
    
    Check: Node connectivity for interface bond1
    Result: Node connectivity passed for interface bond1
    
    Check: Node connectivity for interface bond0
    Result: Node connectivity failed for interface bond0
    
    Result: Node connectivity check failed
    .
    .
    .
    <snipped>
    But if I just run the component check with comp nodecon, it succeeds!
     runcluvfy.sh comp nodecon -n rac-node1 -verbose
    
    Verifying node connectivity
    
    Checking node connectivity...
    
    Checking hosts config file...
      Node Name     Status                    Comment
      ------------  ------------------------  ------------------------
      rac-node1       passed
    
    Verification of the hosts config file successful
    
    
    Interface information for node rac-node1
     Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
     ------ --------------- --------------- --------------- --------------- ----------------- ------
     bond0  10.180.201.15   10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond0  10.180.201.21   10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond0  10.180.201.115  10.180.0.0      0.0.0.0         10.180.10.1     00:1A:4B:DC:F8:54 1500
     bond1  10.255.255.15   10.255.255.0    0.0.0.0         10.180.10.1     00:1A:4B:DC:C4:72 1500
     bond1  169.254.136.7   169.254.0.0     0.0.0.0         10.180.10.1     00:1A:4B:DC:C4:72 1500
    
    
    Check: Node connectivity of subnet 10.180.0.0
      Source                          Destination                     Connected?
      ------------------------------  ------------------------------  ----------------
      rac-node1[10.180.201.15]          rac-node1[10.180.201.21]          yes
      rac-node1[10.180.201.15]          rac-node1[10.180.201.115]         yes
      rac-node1[10.180.201.21]          rac-node1[10.180.201.115]         yes
    Result: Node connectivity passed for subnet 10.180.0.0 with node(s) rac-node1
    
    
    Check: TCP connectivity of subnet 10.180.0.0
      Source                          Destination                     Connected?
      ------------------------------  ------------------------------  ----------------
      rac-node1:10.180.201.15         rac-node1:10.180.201.21         passed
      rac-node1:10.180.201.15         rac-node1:10.180.201.115        passed
    Result: TCP connectivity check passed for subnet 10.180.0.0
    
    
    Check: Node connectivity of subnet 10.255.255.0
    Result: Node connectivity passed for subnet 10.255.255.0 with node(s) rac-node1
    
    
    Check: TCP connectivity of subnet 10.255.255.0
    Result: TCP connectivity check passed for subnet 10.255.255.0
    
    
    Check: Node connectivity of subnet 169.254.0.0
    Result: Node connectivity passed for subnet 169.254.0.0 with node(s) rac-node1
    
    
    Check: TCP connectivity of subnet 169.254.0.0
    Result: TCP connectivity check passed for subnet 169.254.0.0
    
    
    Interfaces found on subnet 10.180.0.0 that are likely candidates for VIP are:
    rac-node1 bond0:10.180.201.15 bond0:10.180.201.21 bond0:10.180.201.115
    
    Interfaces found on subnet 169.254.0.0 that are likely candidates for VIP are:
    rac-node1 bond1:169.254.136.7
    
    Interfaces found on subnet 10.255.255.0 that are likely candidates for a private interconnect are:
    rac-node1 bond1:10.255.255.15
    
    Result: Node connectivity check passed
    
    Verification of node connectivity was successful.
    I'm not sure why.
  • 3. Re: RDBMS 11.2.0.2 fails with VIP error
    Levi-Pereira Guru
    Hi,

    Please post your /etc/hosts here.
  • 4. Re: RDBMS 11.2.0.2 fails with VIP error
    740885 Newbie
    Hi,

    The hosts file is as follows:
    #
    # Loopback
    #
    127.0.0.1               localhost.localdomain localhost
    #
    # RAC Node 1
    #
    10.180.201.15           rac-node1.ddi.aus rac-node1
    10.255.255.15           rac-node1-priv.ddi.aus rac-node1-priv
    10.180.201.115          rac-node1-vip.ddi.aus rac-node1-vip
    #
    # RAC Node 2
    #
    10.180.201.16           rac-node2.ddi.aus rac-node2
    10.255.255.16           rac-node2-priv.ddi.aus rac-node2-priv
    10.180.201.116          rac-node2-vip.ddi.aus rac-node2-vip
  • 5. Re: RDBMS 11.2.0.2 fails with VIP error
    Levi-Pereira Guru
    Hi,

    Is the error also raised when you run the command below?

    ./runcluvfy.sh stage -pre dbinst -n all -verbose
  • 6. Re: RDBMS 11.2.0.2 fails with VIP error
    740885 Newbie
    Yes, it does, but not if I run
     cluvfy comp nodecon -n all
    Thanks
  • 7. Re: RDBMS 11.2.0.2 fails with VIP error
    Levi-Pereira Guru
    Hi,

    It may seem an obvious question, but can you ping both VIPs from both nodes?
  • 8. Re: RDBMS 11.2.0.2 fails with VIP error
    740885 Newbie
    Hi,

    Yes, I can ping both VIPs from both nodes.

    Thanks for your help.
  • 9. Re: RDBMS 11.2.0.2 fails with VIP error
    Levi-Pereira Guru
    Hi,

    Try to generate a trace.

    For example:
    SRVM_TRACE=true
    export SRVM_TRACE
    ./runcluvfy.sh stage -pre dbinst -n all -verbose
    The output will be written to a file in the $CV_HOME/cv/log directory.
    You can then open a service request and upload the corresponding log file to it.
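    For example, to pick up the resulting trace afterwards (a sketch; CV_HOME here is assumed to be the directory runcluvfy.sh was unpacked into):
    # list the newest cluvfy trace files after the run
    ls -lt $CV_HOME/cv/log | head -5
    # search the traces for the failing VIP/subnet check
    grep -il "PRVF-10205" $CV_HOME/cv/log/*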

    Levi Pereira
  • 10. Re: RDBMS 11.2.0.2 fails with VIP error
    740885 Newbie
    Thanks Levi,

    I already did this a day or two ago. Unfortunately, Oracle Support is being pretty slow on this one.

    Thanks again.
  • 11. Re: RDBMS 11.2.0.2 fails with VIP error
    835692 Newbie
    Hey,

    Any news from Oracle Support about this problem? I've just updated to 11.2.0.2 and I'm hitting the same scenario as you. I've already opened an SR, but like you said, Oracle is pretty slow on this one...

    Regards,

    Rui
  • 12. Re: RDBMS 11.2.0.2 fails with VIP error
    353089 Newbie
    I had the same problem with a fresh install of Grid 11.2.0.2 on SuSE SLES11 Linux.

    The reason was that the installer had mixed up the public and the private network interfaces. Normally, the "Identify network interfaces" screen is not shown during installation.

    On the installer screen "Step 4 of 9", where you see the hostnames and the virtual IP names, I had to choose "Identify network interfaces". There I saw
    eth0 - private, eth1 - public, and I could change it to eth0 - public, eth1 - private.

    After this, the installation was successful.
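    As an aside, here is a minimal sketch of how the same public/private classification can be inspected and, if needed, corrected after installation with oifcfg (the interface names, subnets, and Grid home path below are taken from the listings earlier in this thread purely as an example; adjust them to your own environment):
    # show what Clusterware currently has registered
    /u01/app/product/11.2.0/grid/bin/oifcfg getif -global
    # drop an incorrectly registered entry, then re-register it as <interface>/<subnet>:<type>
    /u01/app/product/11.2.0/grid/bin/oifcfg delif -global bond0
    /u01/app/product/11.2.0/grid/bin/oifcfg setif -global bond0/10.180.0.0:public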

    Joachim
  • 13. Re: RDBMS 11.2.0.2 fails with VIP error
    830181 Newbie
    Does anyone have a solution from Oracle for this issue?

    Thanks,
    Rajesh

    Edited by: user9311278 on Apr 30, 2011 4:32 AM
  • 14. Re: RDBMS 11.2.0.2 fails with VIP error
    996971 Newbie

    Hi,

    The problem occurs when ICMP to your router is blocked.

    For example, when

    ping 10.0.0.1 (where 10.0.0.1 is your default gateway)

    fails. In that case the installer tries to swap your NICs from public to private.

    You should just look at:

    # less /u01/app/11.2.0/grid/log/c1/agent/crsd/orarootagent_root/orarootagent_root.log

    There you can see it trying to get an IP from DHCP and not getting one. Instead of the DHCP address it puts a 169.254.x.x address on the wrong NIC.
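    As a quick way to check for this condition (a sketch; the gateway address is the one shown in the cluvfy output earlier in this thread, and "c1" in the log path should be replaced with your node name):
    # is ICMP to the default gateway answered?
    ping -c 3 10.180.10.1
    # look for the agent falling back to a 169.254.x.x address
    grep -iE 'dhcp|169\.254' /u01/app/11.2.0/grid/log/c1/agent/crsd/orarootagent_root/orarootagent_root.log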

     

    Dima
