Have a read of the note "root.sh Fails on the First Node for 11gR2 GI Installation [ID 1191783.1]" to get an idea of what the problem could be.
I had checked the above note. The issue is similar in that the +ASM1 instance is started on node2, but the steps provided are not resolving my case.
Also, I had waited for root.sh to complete on the first node before proceeding on node2 (root.sh is failing on node2).
rootcrs.pl -deconfig -force
will only delete the cluster configuration, but will leave the ASM diskgroup intact.
So running rootcrs.pl again after that will definitely fail with the expected error that the ASM diskgroup already exists.
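For reference, this is run as root with the full path; the grid home below is an assumption (a common default), adjust it to your installation:

# as root on each node that is being deconfigured (grid home path assumed)
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force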
To totally clean up the clusterware and the configured diskgroup, use the -lastnode flag:
rootcrs.pl -deconfig -force -lastnode
which will also destroy the ASM diskgroup (together with the data on it - just to warn you).
After that you can try to run the configuration again.
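A sketch of the last step of the full wipe, again assuming the default grid home path:

# as root on the last node, after deconfiguring the other nodes as above;
# this also drops the ASM diskgroup and all data on it
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -lastnode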
However, this alone does not answer the main question: why did creating the ASM diskgroup fail the first time?
I assume you have a permission problem on the disks, which prevents ASM from creating the diskgroup correctly.
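A quick sanity check, assuming the disks are named /dev/asmdisk* and the grid owner/group is grid:asmadmin (both are assumptions, substitute your own):

# verify owner, group and mode of the candidate disks on every node
ls -l /dev/asmdisk*
# ASM needs read/write access for the grid software owner, e.g.:
chown grid:asmadmin /dev/asmdisk*
chmod 660 /dev/asmdisk*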
PS: If this is 11.2.0.2, you don't need to deconfigure the clusterstack when root.sh fails (though I would recommend it now after so many failed tries).
If root.sh fails the first time, it writes a checkpoint and can continue from that checkpoint.
However, the root problem should be solved first.
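In that case the retry is simply rerunning root.sh; with the checkpoint in place it should pick up where it stopped (grid home path is again an assumption):

# as root on the node where root.sh failed (default grid home assumed)
/u01/app/11.2.0/grid/root.sh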
If it fails again on the first root.sh (after cleaning up everything), post the logfiles it creates (especially the ASM alert.log) in a new forum post.
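For 11gR2 the relevant logs are usually found at the default locations below (the hostname, ORACLE_BASE and the +ASM1 instance name are assumptions, adjust as needed):

# rootcrs / clusterware logs
$GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<hostname>.log
$GRID_HOME/log/<hostname>/alert<hostname>.log
# ASM alert log (ADR location, instance name assumed to be +ASM1)
$ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log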