rootcrs.pl -deconfig -force
will only delete the cluster configuration but will leave the ASM diskgroup intact.
So running rootcrs.pl again after that will definitely fail with the expected error that the ASM diskgroup already exists.
To completely clean up the clusterware and the configured diskgroup, use the -lastnode flag:
rootcrs.pl -deconfig -force -lastnode
which will also destroy the ASM diskgroup (and all the data on that diskgroup - just to warn you).
After that you can try to run the configuration again.
However, this only addresses part of the problem: why did creating the ASM diskgroup fail the first time?
I assume you have a permission problem on the disks, which prevents ASM from creating the diskgroup correctly.
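A quick way to rule out the permission problem is to check owner, group and mode on every candidate disk before re-running root.sh. The following is only a sketch: the /dev/oracleasm/disks path and the grid:asmadmin/660 ownership are assumptions taken from a typical setup - adjust them to your environment.

```shell
#!/bin/sh
# Sketch: verify candidate ASM disks have the expected owner/group/mode.
# check_disk PATH OWNER GROUP MODE -> returns 0 if all three match.
check_disk() {
    path=$1; owner=$2; group=$3; mode=$4
    actual=$(stat -c '%U %G %a' "$path" 2>/dev/null) || return 1
    [ "$actual" = "$owner $group $mode" ]
}

# Assumed disk location and ownership - change to match your setup.
for d in /dev/oracleasm/disks/*; do
    if check_disk "$d" grid asmadmin 660; then
        echo "OK   $d"
    else
        echo "BAD  $d ($(stat -c '%U %G %a' "$d" 2>/dev/null))"
    fi
done
```

Remember that the permissions must be correct on every node, and must survive a reboot (udev rules or ASMLib take care of that), otherwise the diskgroup creation can fail again later.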
PS: If this is 220.127.116.11, you don't need to deconfigure the cluster stack if root.sh failed (though I would recommend it now, after so many failed tries).
If it fails the first time, it writes a checkpoint and can continue from that checkpoint...
However the root problem should be solved first.
If it fails again on the first root.sh (after cleaning everything up), post the logfiles it creates (especially the ASM alert.log) in a new forum post.
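If you are not sure where the ASM alert.log lives, something like the following sketch can locate it. The directory layout below (diag/asm under the Grid user's ORACLE_BASE, per the standard ADR conventions) is an assumption - your ORACLE_BASE may differ.

```shell
#!/bin/sh
# Sketch: find ASM alert logs under an assumed ADR layout,
# e.g. $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
find_asm_alert() {
    base=$1   # ORACLE_BASE of the Grid owner, e.g. /u01/app/grid (assumed)
    find "$base/diag/asm" -name 'alert_*.log' 2>/dev/null
}

find_asm_alert /u01/app/grid
```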