Normally, mapper device permissions are set up in udev/rules.d/50-*.permissions. If you create a udev/rules.d/99-oracle.permissions file where you change the rights, this should work. (In SLES10 and RHEL5 the number must be higher.)
Can you post your udev rules and the filename where you do this?
I already made an attempt with 99-xxxxx.rules, with the following contents:
KERNEL=="/dev/mapper/crs*", GROUP="dba", MODE="660"
but the group ownership never changes.
I used /dev/mapper/crs* as I am binding wwn's (from netapp luns) to crs* in multipath.conf.
Both voting and ocr device are now /dev/mapper/crs (not anymore raw binding).
I think the issue is related to multipath usage, as at install phase (when OUI needed raw devices) I was able to set permission on raw devices using udev.
By the way, I also tried with a 99-xxxxx.permission file, and also with:
DEVPATH=="/dev/mapper/crs*", GROUP="dba", MODE="660"
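For what it's worth, a KERNEL match compares against the kernel device name (dm-0, dm-1, ...), not a path, so a pattern like /dev/mapper/crs* can never match. If your device-mapper udev rules export the map name in DM_NAME (later RHEL5-era device-mapper packages do), a rule along these lines might work instead -- an untested sketch, with crs* being the alias prefix used in this thread:

```
# /etc/udev/rules.d/99-oracle.rules -- sketch, not verified on this system.
# Match device-mapper nodes by their map name instead of by path.
KERNEL=="dm-*", ENV{DM_NAME}=="crs*", GROUP="dba", MODE="0660"
```

Whether DM_NAME is populated depends on the device-mapper rules shipped with your release, so check with `udevinfo` before relying on it.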
Permissions on raw devices get reset on reboot, especially on RH or OEL. I usually get around this problem by adding commands to the /etc/rc.local file granting the required permissions on those raw devices to oracle. Maybe you can give it a try.
In fact, something like this was the workaround I found.
I made a chgrp of the required devices inside /etc/init.d/init.crs, and this solution works.
Maybe it is safer to add the chgrp to rc.local, so that I'll be safe in case of an upgrade of the CRS stack.
Thanks all, and best regards.
I do the change in rc.local as well.
chown oracle:dba /dev/raw/raw*
chmod 660 /dev/raw/raw*
And you should be sorted, although you may need to change the user and group to suit your configuration.
Can you describe the steps for binding multipath devices to raw devices?
We've tried numerous ways of binding a /dev/mapper/mpathX device to a raw device via UDEV in RHEL 5.1, but to no avail; UDEV does NOT seem to honor these rules.
For example, here is our /etc/udev/rules.d/61-raw.rules:
ACTION=="add", KERNEL=="/dev/mapper/mpath24", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="/dev/mapper/mpath26", RUN+="/bin/raw /dev/raw/raw3 %N"
Also, here is our 99-raw.rules:
KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="640"
KERNEL=="raw3", OWNER="oracle", GROUP="oinstall", MODE="660"
But it does NOT work. Meaning, UDEV does NOT create a raw device binding. We've tried to create it from the command line as well. For example:
raw /dev/mpath/mpath24 /dev/raw/raw1
Unsupported raw device name '/dev/mpath/mpath24' (format is /dev/raw/rawN)
Will appreciate some detailed instructions.
Thanks & Regards,
The correct syntax for binding raw devices is:
raw /dev/raw/raw1 /dev/mpath/mpath24
I used the above syntax, binding raw to "mapper" devices.
Remember to create at least one partition and initialize it (with dd), and to assign correct permission/ownership on raw devices.
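The initialize-with-dd step above can be sketched roughly like this. DEV is a placeholder file here so the sketch is safely runnable; on a real node it would be the actual mapper partition (and running dd against the wrong device destroys data):

```shell
# Sketch of preparing an OCR/voting partition before the install.
# DEV is a stand-in file for illustration, NOT a real block device.
DEV="${DEV:-/tmp/crs-demo.img}"

# Zero the first 25 MB so stale headers don't confuse the installer.
dd if=/dev/zero of="$DEV" bs=1M count=25 conv=notrunc 2>/dev/null

# Assign ownership/permissions (chown needs root; failure ignored in this demo).
chown oracle:dba "$DEV" 2>/dev/null || true
chmod 660 "$DEV"
```

On a real cluster the same three steps would run against the partition itself, on every node.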
After the install phase, I changed both the voting and OCR devices to point to block devices (/dev/mpath/....).
Hope this helps.
Just for completeness,
udev rules didn't work in my environment either.
I made the raw bindings and changed ownership/perms from the CLI.
RH AS4/u6 - /etc/udev/permissions.d/40-udev.permissions:
#asm raw devices
This could work in a single-node configuration.
If you work on a cluster, you can't rely on the /dev/dm-N devices, because you cannot be sure that the same dm-N on both machines points to the same (multipathed) device on the storage.
So you should use the /dev/mpath/xxx devices, and there is no way to change ownership/permissions on them through udev.
Permissions set using udev rules must be applied on every machine:
[root@centos51-rac-1 ~]# cd /etc/udev/rules.d
Add the following at the bottom of the file 50-udev.rules:
[root@centos51-rac-1 rules.d]# vi 50-udev.rules
KERNEL=="sdb1", OWNER="root", GROUP="oinstall", MODE="0640"
KERNEL=="sdb2", OWNER="oracle", GROUP="oinstall", MODE="0640"
(sdb1 & sdb2 need to be changed to your devices!)
when using -legacy- raw devices:
In 60-raw.rules, add the following lines:
[root@centos51-rac-1 rules.d]# vi 60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
(sdb1 & sdb2 need to be changed to your devices!)
Create a new file called 99-raw.rules, and enter the following information:
[root@centos51-rac-1 rules.d]# vi 99-raw.rules
KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"
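One more note: edited rules only take effect after udev reloads them. On RHEL/CentOS 5 that is roughly the following (a sketch -- command names vary between udev versions, and newer distributions use `udevadm control --reload` / `udevadm trigger` instead):

```
# Reload the udev rules and re-create device nodes (RHEL5-era commands).
udevcontrol reload_rules
start_udev

# Verify the raw bindings and resulting permissions:
raw -qa
ls -l /dev/raw/raw1 /dev/raw/raw2
```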
Are you using multipath in your conf?
I had no issues mapping "single-pathed" devices with raw and changing ownership/permission.
My problem was changing perms on devices under /dev/mpath/, which are under my full control in multipath.conf.
Anyway, I solved my issues using rc.local to change perms on "multipathed" block devices used for voting and ocr, and using asmlib to manage devices under ASM.
I just installed 10gR2 RAC under RHEL 5.2. I used device-mapper to create aliases for my multipathed OCR and voting devices, and set up multipath.conf to persistently name the devices based on their SCSI ID (this ID can be obtained from the /var/lib/multipath/bindings file). I then used the raw command (raw /dev/raw/raw1 /dev/mapper/ocrvote4p1) to create raw devices for use with the Oracle installer, set the proper ownership/permissions on them, and kicked off the installer.
Once the install completed (before running the root.sh file), I edited the rootconfig file (under the install directory), changed the /dev/raw/rawX references to the multipath device aliases, applied patch 4679769 to resolve the clsfmt issue with multiple paths, and executed root.sh on each cluster node. The cluster started on all nodes without any issues and without any references to raw devices in the OCR (except for the VIPCA issues with LD_ASSUME_KERNEL and non-routable VIP addresses). I resolved those by clearing the LD_ASSUME_KERNEL environment variable in vipca/srvctl and running VIPCA manually after the oifcfg configuration assistant completed.
I am still working on determining the proper way to set the permissions of my multipath device aliases. I currently set them in the /etc/rc.local script, but that seems rather kludgy.
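For reference, the rc.local approach described above looks roughly like this. The ocrvote4p1 alias comes from the post; the second alias is purely illustrative of additional voting/OCR partitions:

```
# /etc/rc.local sketch -- re-apply ownership/permissions on the multipath
# aliases at boot, since they get reset. Alias names are examples from
# this poster's multipath.conf and would differ per site.
chown root:oinstall /dev/mapper/ocrvote4p1
chmod 640 /dev/mapper/ocrvote4p1
chown oracle:oinstall /dev/mapper/ocrvote4p2   # illustrative second alias
chmod 660 /dev/mapper/ocrvote4p2
```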