We have a couple of servers running OVM 3.2.2 to set up a test environment. The only common storage is six LUNs on an EMC CX3-40. I can see the LUNs from the servers with multipath, and they are available under /dev/mpath:
[root@dvmicovm1 bin]# multipath -ll |grep RAID
EMC_ASM05 (3600601601f911900a4250377b5b8e211) dm-5 DGC,RAID 10
EMC_ASM04 (3600601601f9119000ec7e110b9b8e211) dm-3 DGC,RAID 10
EMC_ASM03 (3600601601f91190044870613b5b8e211) dm-4 DGC,RAID 10
EMC_ASM02 (3600601601f91190080abeac9b4b8e211) dm-6 DGC,RAID 10
EMC_ASM01 (3600601601f911900ba6e016bb4b8e211) dm-2 DGC,RAID 10
EMC_OVMstorage (3600601601f9119004e22ab7ab3b8e211) dm-1 DGC,RAID 5
I can't create the clustered pool because I cannot see the storage at management level. I tried to create an OCFS2 filesystem instead, but I can't find a good guide explaining how to do it from the command line with this version. Help is welcome:
[root@dvmicovm1 mpath]# mkfs.ocfs2 -Tdatafiles -L EMC_repo -b 4K -C 4K -J size=64M -N16 /dev/mpath/EMC_OVMstoragep1
Cluster stack: classic o2cb
Overwriting existing ocfs2 partition.
mkfs.ocfs2: Unable to access cluster service while initializing the cluster
What do you mean by "at management level"? The LUNs need to be presented to the VM servers only, but to each one simultaneously. They need to show up under /dev/mapper, which seems to be the case. However, there seem to be more LUNs visible than there should be… I reckon the ASM0x LUNs are not supposed to be used by OVM, but are LUNs for some Oracle DB(s), no? If that is the case, I'd suggest hiding them from the OVM servers in the first place.
Then you'd need at least two LUNs for OVM: one small one for the server pool (depending on the cluster size, you should be able to get away with 32 GB max.), and another one that actually gets used as your shared storage repository. This one can be nearly as big as you like.
So, to be able to create a clustered pool, you need access to two distinct LUNs from each OVM server. You can check whether each OVM server has access to the LUNs by selecting the server in OVM Manager and changing the perspective in the right pane to "Physical Disks".
I'd also refrain from creating any OCFS2 filesystem on your own while setting up the clustered storage pool, since this will likely get you into more trouble: OVM won't overwrite an existing OCFS2 filesystem if it finds one on any disk you throw at it, and to get rid of that filesystem you will likely have to wipe the LUNs manually. That usually means using dd, a tool that always has to be used with extreme caution, especially when LUNs are visible that must not be touched under any circumstances (see your ASM LUNs…).
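If a stray OCFS2 filesystem really does have to go, zeroing the start of the LUN is usually enough to destroy the superblock. Here's a minimal sketch that rehearses the wipe on a scratch file first, so the dd invocation can be checked before it is ever pointed at a real device; the device path in the comment is only an example from this thread, and the 4 MiB count is a conservative assumption, not an OCFS2-documented value:

```shell
# Rehearse on a scratch file first; only swap in the real device path
# (e.g. /dev/mpath/EMC_OVMstoragep1 from the earlier post) after
# triple-checking with `multipath -ll` that it is NOT one of the ASM LUNs.
TARGET=/tmp/scratch_lun.img

dd if=/dev/urandom of="$TARGET" bs=1M count=4               # simulate a used disk
dd if=/dev/zero of="$TARGET" bs=1M count=4 conv=notrunc     # wipe the first 4 MiB

# verify the wiped region now reads back as all zeros
cmp -n $((4*1024*1024)) "$TARGET" /dev/zero && echo wiped
```

The `conv=notrunc` flag matters on a rehearsal file (so dd overwrites in place instead of truncating); on a block device it is harmless.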
I just set up a test environment using an older CX3-40, and it works pretty well. Everything worked for me, including storage repository creation on FC LUNs using multipath. You don't need to create an OCFS2 filesystem manually. In fact, you shouldn't have ANY filesystem at all on any of the LUNs you want to use for a storage repository. NONE. Not even a partition. Present the storage to your servers, go to the Storage tab, discover the storage, and set the admin servers. Refresh the storage and you should see the LUNs available for use.
Thanks for your answers, I finally found the issue.
The multipath.conf was the issue: it was set up using friendly names and aliases, and OVM rejected the file. I had to remove the multipaths section of the configuration and work with plain WWIDs to be able to scan the disks.
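For reference, the working configuration ends up along these lines; this is only a sketch of the relevant parts (your array may need additional defaults), but the key points match what was described above: no aliases, no multipaths section, so devices appear by WWID:

```
defaults {
    user_friendly_names no    # devices appear under /dev/mapper by WWID
}
# No "multipaths { alias ... }" section -- OVM rejected the friendly
# names/aliases and wants to see the raw WWIDs when scanning the disks.
```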
I'm setting up an Oracle RAC cluster with ASM inside the OVM cluster, which is why I have five extra LUNs presented to the system.