runcluvfy.sh stage -post hwos -n host1,host2 -verbose
I get the error message below:
Shared storage check failed on nodes "node1,node2". Verification of shared storage accessibility was unsuccessful on all the nodes
Hi Riaz,
cluvfy's shared storage check is pretty limited:
MySupport Note 316817.1: CLUSTER VERIFICATION UTILITY (CLUVFY) FAQ
The easiest way to check sharedness would be to dd a text file, as the oracle/grid user, onto an empty device from one node and read it back with dd from the other node:
Node1: dd if=<anytextdocument> of=<device> bs=1M count=10
Node2: dd if=<device> of=/tmp/text.txt bs=1M count=10
If the contents can be successfully retrieved, then the device is shared.
Please don't do this with a device that has data on it!
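The two-step check above can be sketched locally using plain files (a hypothetical stand-in for the real device, so it runs safely anywhere); on a real cluster, DEV would be the shared raw device and the read-back step would run on the second node:

```shell
# Local simulation of the two-node dd round trip; /tmp paths are
# placeholders, not real cluster devices.
SRC=/tmp/ssa_marker.txt
DEV=/tmp/ssa_fake_device      # stands in for the shared raw device
OUT=/tmp/ssa_readback.txt

echo "shared-storage marker" > "$SRC"

# "Node 1": write the marker file onto the device
dd if="$SRC" of="$DEV" bs=1M count=10 2>/dev/null

# "Node 2": read it back from the same device
dd if="$DEV" of="$OUT" bs=1M count=10 2>/dev/null

# Getting the same bytes back on the other node means the device is shared
cmp -s "$SRC" "$OUT" && echo "contents match - device is shared"
```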
You could also check whether at least the disk header is recognized, by simply reading the header contents with dd...
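A header read might look like the sketch below, demonstrated here against a scratch file standing in for the raw device; on a real node you would point DEV at the candidate disk, e.g. /dev/rdsk/c4t0d0s4 (a hypothetical path):

```shell
# Scratch file in place of the real device; an ASM disk header, for
# example, begins with the magic string "ORCLDISK".
DEV=/tmp/fake_disk_header
printf 'ORCLDISK' > "$DEV"

# Dump the first 512-byte block of the header
dd if="$DEV" bs=512 count=1 2>/dev/null | od -c | head -5
```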
Thanks for the input. The disk is raw; how can I check the sharedness?
The above post is by me. I posted it while logged in at metalink; hence OTN used the same user :)
You can use your Metalink username for the OTN Forum also.
Check the notes in Metalink; hope this helps you.
Shared disk check with the Cluster Verification Utility [ID 372358.1]
5852975 SHARED STORAGE CHECK DOESN'T DISCOVER SHARED DISKS
How To check that OS storage device files pointed to the same storage in both nodes [ID
$ cluvfy comp ssa -n rac1,rac2
Hi,
Since you are already logged in to MySupport, see here:
Note 372358.1: Shared disk check with the Cluster Verification Utility
which describes how cluvfy tries to do it, and how you can do it by hand.
However, your disk paths (rdsk), especially for OCR and voting, should be the same on all nodes; otherwise the cluster has problems identifying the disks.
It does not matter so much for ASM....
When you add a node, what you'll be doing is extending the clusterware and Oracle homes from one of the already existing nodes using the addNode.sh script. So it is important that the OCR and voting disks are accessible via the same path/soft link (whichever you are using) from the new node, and that they have the right permissions.
It would suffice to confirm that the disks are shared by using the dd command from the new node, for example:
dd if=/dev/rdsk/c1t500A098286E80F61d9s0 of=/dev/null bs=1048576 count=100
For ASM disks, you can afford to have different paths/links for the disks: once you have extended your ASM home to the new node, you can change the asm_diskstring in the spfile on the new node and it will scan and figure out the disks.
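A sketch of that diskstring change on the new node (not runnable outside a real cluster, since it needs a started ASM instance; '/asmdisks/*' is a hypothetical discovery path, so substitute wherever your device files live):

```shell
# Sketch only: point the new node's ASM instance at its device files.
# Requires SYSDBA access to the local ASM instance.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET asm_diskstring = '/asmdisks/*' SCOPE=BOTH;
EOF
```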
Don't worry much about cluvfy, it's not the smartest utility anyways.
Thanks for the useful notes. After checking the Metalink note for cluvfy, I found the reason why it is not able to pass the shared disk check: it can't get the serial # of the SAN disk.
Could you guys please tell me how the dd method works? I tried with different disks and every time I get the same output:
0+1 records in
0+1 records out
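For reference, that "0+1 records" output just means dd transferred one partial block, because the input was smaller than the 1M block size; it is not an error. It can be reproduced locally with any small file:

```shell
# dd reports full+partial block counts; a 10-byte input with bs=1M
# yields "0+1 records in / 0+1 records out" (dd prints this to stderr).
printf 'small file' > /tmp/dd_demo_in
dd if=/tmp/dd_demo_in of=/tmp/dd_demo_out bs=1M count=10 2>&1 | head -2
```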
Never mind. I found the solution at http://www.idevelopment.info.
(1) Create the following directory structure on the second node (same as on the first node), with the same permissions as on the existing node:
(2) Use ls -lL /dev/rdsk/<Disk> to find the major and minor IDs of the shared disks, and attach those IDs to the relevant device files above using the mknod command:
# ls -lL /dev/rdsk/c4t0d0*
crw-r----- 1 root sys 32,256 Aug 1 11:16 /dev/rdsk/c4t0d0s0
crw-r----- 1 root sys 32,257 Aug 1 11:16 /dev/rdsk/c4t0d0s1
crw-r----- 1 root sys 32,258 Aug 1 11:16 /dev/rdsk/c4t0d0s2
crw-r----- 1 root sys 32,259 Aug 1 11:16 /dev/rdsk/c4t0d0s3
crw-r----- 1 root sys 32,260 Aug 1 11:16 /dev/rdsk/c4t0d0s4
crw-r----- 1 root sys 32,261 Aug 1 11:16 /dev/rdsk/c4t0d0s5
crw-r----- 1 root sys 32,262 Aug 1 11:16 /dev/rdsk/c4t0d0s6
crw-r----- 1 root sys 32,263 Aug 1 11:16 /dev/rdsk/c4t0d0s7

mknod /asmdisks/crs c 32 257
mknod /asmdisks/disk1 c 32 260
mknod /asmdisks/disk2 c 32 261
mknod /asmdisks/vote c 32 259

# ls -lL /asmdisks
total 0
crw-r--r-- 1 root oinstall 32,257 Aug 3 09:07 crs
crw-r--r-- 1 oracle dba 32,260 Aug 3 09:08 disk1
crw-r--r-- 1 oracle dba 32,261 Aug 3 09:08 disk2
crw-r--r-- 1 oracle oinstall 32,259 Aug 3 09:08 vote