
root.sh failing on node2 on grid installation

706300 Newbie
On node1, execution of root.sh completed successfully. Once it completed, the steps below were carried out on node2.

On node2, on the first execution of root.sh:



CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded

DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space

Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM



Once we deconfigure (./rootcrs.pl -verbose -deconfig -force; see the sketch below) and rerun root.sh, we get:
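
For reference, a minimal sketch of the deconfig invocation; the Grid home path is an assumption, adjust it to your environment:

# run as root on node2; Grid home path /u01/app/11.2.0/grid is assumed
cd /u01/app/11.2.0/grid/crs/install
perl rootcrs.pl -verbose -deconfig -force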


CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded

Disk Group DATA already exists. Cannot be created again

Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM


After this, the second error appears every time until we perform the steps below. After these steps, the first error appears once more:

/etc/init.d/oracleasm deletedisk VOTE (VOTE is the ASMLib disk used as the voting disk in grid)
/etc/init.d/oracleasm createdisk vote /dev/sdd1
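
For completeness, a quick way to confirm what ASMLib sees on each node after these steps; the expected owner/group is an assumption and depends on how oracleasm configure was run:

# run as root on node1 and node2
/etc/init.d/oracleasm scandisks      # rescan so the relabelled disk is visible on the other node
/etc/init.d/oracleasm listdisks      # should list VOTE (and any DATA disks labelled earlier)
/etc/init.d/oracleasm querydisk VOTE # the voting-disk label used above
ls -l /dev/oracleasm/disks/          # ownership should match the ASM owner, e.g. oracle:oinstall (assumed)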


I even deinstalled grid completely and tried a fresh start, but root.sh still fails on the second node with the same error.

Any help will be highly appreciated.


Note: configuration
VMware Workstation 7
Guest operating system: Red Hat 5.4 x86
Grid: 11.2.0.1

/etc/hosts
192.168.78.51 node1 node1.rac.com
192.168.78.61 node1-vip node1-vip.rac.com
172.16.100.51 node1-priv node1-priv.rac.com
192.168.78.52 node2 node2.rac.com
192.168.78.62 node2-vip node2-vip.rac.com
172.16.100.52 node2-priv node2-priv.rac.com
192.168.78.250 node-scan node-scan.rac.com
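
A quick sanity check that both nodes resolve these names consistently (hostnames taken from the /etc/hosts above):

# run on each node
getent hosts node-scan node1-vip node2-vip   # should return the addresses listed above
ping -c 1 node1-priv                         # from node2, checks the private interconnect
ping -c 1 node2-priv                         # from node1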



Thanks & regards.
Nilesh
  • 1. Re: root.sh failing on node2 on grid installation
    585179 Expert
    Hi,

    Have a read of the note "root.sh Fails on the First Node for 11gR2 GI Installation [ID 1191783.1]" to get an idea of what the problem could be.


    Cheers
  • 2. Re: root.sh failing on node2 on grid installation
    706300 Newbie
    I had checked the above note (1191783.1). The issue is similar in that the +ASM1 instance is started on node2, but the steps provided do not resolve my case.

    Also, I had waited for root.sh to complete on the first node before proceeding to node2 (it is on node2 that root.sh is failing).
  • 3. Re: root.sh failing on node2 on grid installation
    Sebastian Solbach (DBA Community) Guru
    Hi,

    rootcrs.pl -deconfig -force

    will only delete the cluster configuration but will leave the ASM diskgroup intact.
    So running rootcrs.pl again after that will definitely fail with the expected error that the ASM diskgroup already exists.

    To completely clean up the clusterware and the configured diskgroup, use the -lastnode flag:

    rootcrs.pl -deconfig -force -lastnode

    which will also destroy the ASM diskgroup (and, just to warn you, the data on this diskgroup).
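
    A minimal sketch of the full invocation, assuming the Grid home is /u01/app/11.2.0/grid (adjust to your environment):

    # as root on the node being cleaned up
    cd /u01/app/11.2.0/grid/crs/install
    perl rootcrs.pl -verbose -deconfig -force -lastnode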

    After that you can try to run the configuration again.
    However, this does not necessarily solve the underlying problem: why did creating the ASM diskgroup fail the first time?
    I assume you have a permission problem on the disks, which prevents ASM from creating the diskgroup correctly. A quick check is sketched below.
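
    A minimal way to compare the disk permissions on both nodes; the expected owner/group is an assumption and depends on which user owns your Grid installation:

    # as root, on node1 and on node2
    /usr/sbin/oracleasm configure        # shows the configured ASMLib user/group (ORACLEASM_UID/GID)
    ls -l /dev/oracleasm/disks/          # the ASMLib devices should be owned by that user/group
    ls -l /dev/sdd1                      # the partition labelled as the voting disk above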

    PS: If this is 11.2.0.3, you don't need to deconfigure the cluster stack when root.sh fails (though I would recommend it now, after so many failed tries).
    If root.sh fails the first time, it writes a checkpoint and can continue from that checkpoint...
    However, the root problem should be solved first.

    If it fails again on the first root.sh (after cleaning up everything), post the logfiles it creates (especially the ASM alert.log) in a new forum post.

    Regards
    Sebastian
