3 Replies Latest reply: Feb 20, 2014 8:29 PM by Cuong Pham

Grid Infra configuration failed when running root.sh on second node

Cuong Pham Newbie

Hi everyone.

I am new to RAC environments. While setting up Oracle RAC in a local environment, I ran into this problem:

- I ran root.sh successfully on the first node.

- After that, three virtual NICs were created on the first node to listen on the SCAN addresses.

- Once that had succeeded on the first node, I ran the same script on the second node, but this error occurred:

 

CRS-2676: Start of 'ora.DATA.dg' on 'dbnode2' succeeded
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5017: The resource action "ora.scan1.vip start" encountered the following error:
CRS-5005: IP Address: 192.168.50.124 is already in use in the network
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/dbnode2/agent/crsd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.scan1.vip' on 'dbnode2' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan2.vip
CRS-5017: The resource action "ora.scan2.vip start" encountered the following error:
CRS-5005: IP Address: 192.168.50.122 is already in use in the network
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/dbnode2/agent/crsd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.scan2.vip' on 'dbnode2' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan2.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan3.vip
CRS-5017: The resource action "ora.scan3.vip start" encountered the following error:
CRS-5005: IP Address: 192.168.50.123 is already in use in the network
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/dbnode2/agent/crsd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.scan3.vip' on 'dbnode2' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan3.vip' on that would satisfy its placement policy
start scan ... failed
FirstNode configuration failed at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 9379.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
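
One way to see which machine is actually holding an "in use" address is to arping it from the second node; the MAC address in the reply identifies the owner. A sketch, assuming eth0 is the public interface:

# Run from dbnode2: who answers for the conflicting SCAN VIP address?
# The MAC in the reply identifies the node currently holding it.
/sbin/arping -c 2 -I eth0 192.168.50.124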

 

I tried again several times (deconfiguring in between), but the problem was still there. Can you explain:

- Why, after running root.sh on the first node, were all of the SCAN IP interfaces created on that node? This seems to be the reason root.sh fails on the second node.

- How can I solve it?

 

I am using a local DNS server to resolve the SCAN name to 3 IPs, and I can run the runcluvfy.sh script successfully on both nodes.
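
For example, this is roughly how the SCAN resolves for me (the SCAN name shown is a placeholder for my real one; server lines trimmed):

$ nslookup rac-scan.localdomain
Name:    rac-scan.localdomain
Address: 192.168.50.122
Name:    rac-scan.localdomain
Address: 192.168.50.123
Name:    rac-scan.localdomain
Address: 192.168.50.124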

 

Thanks in advance.

  • 1. Re: Grid Infra configuration failed when running root.sh on second node
    Vandana B - Oracle Journeyer

    Hi,

     

    I see that you are using DNS. Could you also confirm that you have not added the SCAN entries to the /etc/hosts file?

     

    Also, after the deconfigure, are all the SCAN IPs getting unplumbed from the public interface? Once you deconfigure, ensure that the SCAN IPs are not pingable and have been unplumbed successfully from the public interface, then retry running root.sh. A quick way to check is sketched below.
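
    For example, something along these lines (a rough sketch; substitute your own public interface name, and note the IPs are taken from your error log):

        # 1. No SCAN entries should be hard-coded in /etc/hosts
        grep -i scan /etc/hosts

        # 2. After the deconfig, none of the SCAN IPs should answer a ping
        for ip in 192.168.50.122 192.168.50.123 192.168.50.124; do
            ping -c 1 -W 1 "$ip" && echo "$ip is still plumbed somewhere"
        done

        # 3. No interface alias should still carry one of them
        /sbin/ifconfig -a | grep -E 'inet addr:192\.168\.50\.12[234]'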

     

    Regards,

    Vandana - Oracle

  • 2. Re: Grid Infra configuration failed when running root.sh on second node
    Cuong Pham Newbie

    Hi Vandana,

    The strange thing is that all of the SCAN IP virtual NICs were created on the first node after running root.sh on that node; when I run root.sh on the second node, I hit that error. I followed this article to deconfigure: http://www.oracle-base.com/articles/rac/clean-up-a-failed-grid-infrastructure-installation.php . After deconfiguration, I rebooted the server and re-ran the config script. I do not know why root.sh creates ALL of the SCAN interfaces on the first node, leaving no SCAN IP available to configure on the second node. Have you ever met this situation?
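
    For reference, the cleanup on the failed node boiled down to this command from that article (run as root; the -force flag skips checks, so use it only on a node you intend to wipe):

        # Deconfigure Grid Infrastructure on the failed node (11.2)
        /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib \
            /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force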

     

    Regards.

  • 3. Re: Grid Infra configuration failed when running root.sh on second node
    Cuong Pham Newbie

    PS:

    I am using two VMware virtual machines. After running root.sh on the first node, I checked and found this curious output:

    [oracle@dbnode1 sshsetup]$ /sbin/ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:BC:43:1B
              inet addr:192.168.50.66  Bcast:192.168.50.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:febc:431b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:249814 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2956882 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:24913472 (23.7 MiB)  TX bytes:4369984705 (4.0 GiB)

    eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:BC:43:1B
              inet addr:192.168.50.120  Bcast:192.168.50.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:BC:43:1B
              inet addr:192.168.50.122  Bcast:192.168.50.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    eth0:3    Link encap:Ethernet  HWaddr 00:0C:29:BC:43:1B
              inet addr:192.168.50.123  Bcast:192.168.50.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    eth0:4    Link encap:Ethernet  HWaddr 00:0C:29:BC:43:1B
              inet addr:192.168.50.124  Bcast:192.168.50.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    eth1      Link encap:Ethernet  HWaddr 00:0C:29:BC:43:25
              inet addr:192.168.29.10  Bcast:192.168.29.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:febc:4325/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:471 errors:0 dropped:0 overruns:0 frame:0
              TX packets:664 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:82216 (80.2 KiB)  TX bytes:107920 (105.3 KiB)

    eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:BC:43:25
              inet addr:169.254.75.201  Bcast:169.254.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:10626 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10626 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:7942626 (7.5 MiB)  TX bytes:7942626 (7.5 MiB)

    I think this is the cause of the failure on node 2: all three SCAN VIPs (192.168.50.122-124) are plumbed on the first node, as eth0:2 through eth0:4.

     

    UPDATE:

    This turned out to be normal: while the first node is the only active cluster member, there is nowhere else for Clusterware to place the SCAN VIPs, so all three start there. I ignored it and the installation continued normally. Thank you all for your help.
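
    To confirm where the SCAN VIPs end up once every node is up, you can query Clusterware (the node placement shown below is illustrative, not from my cluster):

        [oracle@dbnode1 ~]$ /u01/app/11.2.0/grid/bin/srvctl status scan
        SCAN VIP scan1 is enabled
        SCAN VIP scan1 is running on node dbnode2
        SCAN VIP scan2 is enabled
        SCAN VIP scan2 is running on node dbnode1
        SCAN VIP scan3 is enabled
        SCAN VIP scan3 is running on node dbnode1

    If they are still bunched on one node, they can be moved by hand, e.g. srvctl relocate scan -i 1 -n dbnode2.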
