I am installing Oracle Grid Infrastructure 11.2.0 on OEL 6.3.
I reached the final step, running root.sh. It successfully started the services, but then I got this error:
Failed to create voting files on disk group CR.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
Voting file add failed
Failed to add voting disks at /opt/oracle/11.2.0/grid/crs/install/crsconfig_lib.pm line 6780.
/opt/oracle/11.2.0/grid/perl/bin/perl -I/opt/oracle/11.2.0/grid/perl/lib -I/opt/oracle/11.2.0/grid/crs/install /opt/oracle/11.2.0/grid/crs/install/rootcrs.pl execution failed
The CR disk group is configured using oracleasm:
oracleasm createdisk CR /dev/sdb1
/dev/sdb1 is a partition that was created on the LUN /dev/sdb using fdisk.
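To rule out the usual causes of this error, it may help to verify that ASMLib actually sees the disk and that the device node has the right ownership. A minimal check could look like this (the grid:asmadmin owner/group are assumptions; substitute your Grid Infrastructure software owner):

```shell
# Verify the ASMLib disk exists and is mapped to the expected partition
oracleasm listdisks          # should list CR
oracleasm querydisk -p CR    # should report the backing device, e.g. /dev/sdb1

# Check the device node ASM will open; owner/group must match the
# Grid Infrastructure software owner (grid:asmadmin is an assumption here)
ls -l /dev/oracleasm/disks/CR
```

If the owner or mode on /dev/oracleasm/disks/CR is wrong, the ASM instance cannot write the voting files even though the disk group itself mounts.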
Any idea what might be wrong? Any answers are highly appreciated. Thanks!
Edited by: CoBy on 03.01.2013 15:36
Here is part of the logfile from the root.sh run. Meanwhile I tried deinstalling the clusterware and cleaning up, then I verified the server configuration with cluvfy and all was OK. I also used a different disk group name. Same problem.
2013-01-04 00:24:26: Start of resource "ora.crsd" Succeeded
2013-01-04 00:24:26: Creating voting files
2013-01-04 00:24:26: Creating voting files in ASM diskgroup OCR_VOTING
2013-01-04 00:24:26: Executing crsctl replace votedisk '+OCR_VOTING'
2013-01-04 00:24:26: Executing /opt/oracle/11.2.0/grid/bin/crsctl replace votedisk '+OCR_VOTING'
2013-01-04 00:24:26: Executing cmd: /opt/oracle/11.2.0/grid/bin/crsctl replace votedisk '+OCR_VOTING'
2013-01-04 00:24:26: Command output:
Failed to create voting files on disk group OCR_VOTING.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
End Command output
oracleasm is released for this version; I installed it using yum from the software repository for this version, so it must be released. It has also been working fine so far. I think the problem is somewhere else, but I still cannot prove it.
SELinux is disabled anyway.
[root@srv01 ~]# fdisk -l /dev/sdf
Disk /dev/sdf: 524 MB, 524288000 bytes
17 heads, 59 sectors/track, 1020 cylinders
Units = cylinders of 1003 * 512 = 513536 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x65fc6b88
Device Boot Start End Blocks Id System
/dev/sdf1 1 1020 511500+ 83 Linux
Partition 1 does not start on physical sector boundary.
There are three of those disks (/dev/sdf, /dev/sdg, /dev/sdh), all formatted the same way.
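Note the warning in the fdisk output above: "Partition 1 does not start on physical sector boundary." The disk reports 512-byte logical but 4096-byte physical sectors, so a partition is 4K-aligned only when its start sector (counted in 512-byte units) is divisible by 8. A minimal sketch of that check (START=59 is a hypothetical value; on a real system read it from /sys/block/sdf/sdf1/start):

```shell
# 4K-alignment check: the start sector (in 512-byte units) must be a
# multiple of 8 on a disk with 4096-byte physical sectors.
# START=59 is hypothetical; on a real system use:
#   START=$(cat /sys/block/sdf/sdf1/start)
START=59
if [ $((START % 8)) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned"
fi
```

Misalignment hurts I/O performance on 4K-sector disks but does not by itself explain the voting file failure; recreating the partitions aligned to a 1 MiB boundary avoids the warning.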
First of all, you don't need to deinstall everything. Since 11.2.0.2 the root.sh script can be resumed, so simply fix the error and rerun root.sh.
Even if that fails, you can still use $GI_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode to wipe everything. So there is no need to deinstall everything.
Regarding the error you have -
What does the ASM alert log say? Maybe the disk group could not be created correctly (device permissions?), or it already exists?
What more information do you see in the logfile of root.sh (or in the clusterware logfiles, if they already exist)?
I want to close the thread with the following result:
I went for an installation without ASMLib and configured the drives using udev instead. For some reason ASMLib could not handle the permissions on the drives correctly.
After mapping the devices with udev and a few reboots to test their availability, I could run the Grid Infrastructure installation without problems.
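For anyone hitting the same issue: the udev mapping I used looked roughly like this. This is a sketch only; the scsi_id value, symlink name, and the grid:asmadmin owner/group are placeholders for your environment. It goes in a rules file such as /etc/udev/rules.d/99-oracle-asmdevices.rules:

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (sketch; all values are placeholders)
# Match the partition by its SCSI WWID so the mapping survives reboots and
# device renames, create a stable alias, and set ownership for the Grid user.
KERNEL=="sd?1", BUS=="scsi", \
  PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", \
  RESULT=="3600xxxxxxxxxxxxxxxxxxxxxxxxxxxxx", \
  SYMLINK+="oracleasm/asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

After reloading the rules (udevadm control --reload-rules, then udevadm trigger), the ASM discovery string can point at the stable aliases, e.g. /dev/oracleasm/asm-disk*.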
Thanks for all the hints.