2 Replies Latest reply: Oct 6, 2012 2:05 PM by AshishShukla

    Shared storage check failed on nodes

    961013
      hi friends,

      I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below

      ./runcluvfy stage -post hwos -n rac1,rac2, I am facing the error below.


      node connectivity check failed.


      Checking shared storage accessibility...

      WARNING:
      Unable to determine the sharedness of /dev/sde on nodes:
      rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1


      Shared storage check failed on nodes "rac2,rac1"

      Please help me, anyone; it's urgent.


      Thanks,
      poorna.

      Edited by: 958010 on 3 Oct, 2012 9:47 PM
        • 1. Re: Shared storage check failed on nodes
          dataseven
          hi,

          please have a look at this topic:

          Node connectivity check failed. (RAC)

          regards,
          • 2. Re: Shared storage check failed on nodes
            AshishShukla
            Hello,

            It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
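
            As a quick check before reinstalling anything, you can ask cluvfy to verify shared storage accessibility for the specific device. This is just a sketch; adjust the device path to your shared disk and run it from the same staging directory as runcluvfy:

            $ ./runcluvfy comp ssa -n rac1,rac2 -s /dev/sde -verbose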


            Steps to configure a two-node 10g RAC on RHEL-4
            -------------------------------------------------------------------

            Remark-1: H/W requirement for RAC

            a) 4 Machines
            1. Node1
            2. Node2
            3. storage
            4. Grid Control

            b) 2 switches

            c) 6 straight cables


            Remark-2: S/W requirement for RAC

            a) 10g clusterware

            b) 10g database

            Both must be the same version (e.g. 10.2.0.1.0)


            Remark-3: RPMs requirement for RAC

            a) all 10g RPMs (better to use RHEL-4 and choose the 'Everything' option so all RPMs are installed)

            b) 4 additional RPMs are required for the installation

            1. compat-gcc-7.3-2.96.128.i386.rpm
            2. compat-gcc-c++-7.3-2.96.128.i386.rpm
            3. compat-libstdc++-7.3-2.96.128.i386.rpm
            4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm



            ------------ Start Machine Preparation --------------------



            1. Prepare 3 machines
            ----------------------

            i. node1.oracle.com

            eth0 (192.9.201.183) - for public network
            eth1 (10.0.0.1) - for private n/w
            gateway (192.9.201.1)
            subnet (255.255.255.0)

            ii. node2.oracle.com

            eth0 (192.9.201.187) - for public network
            eth1 (10.0.0.2) - for private n/w
            gateway (192.9.201.1)
            subnet (255.255.255.0)


            iii. openfiler.oracle.com

            eth0 (192.9.201.182) - for public network
            gateway (192.9.201.1)
            subnet (255.255.255.0)


            NOTE:-

            -- eth0 of all the nodes should be connected to the public network using SWITCH-1
            -- eth1 of both RAC nodes should be connected to the private network using SWITCH-2


            2. Network configuration
            -----------------------

            # vim /etc/hosts

            192.9.201.183 node1.oracle.com node1
            192.9.201.187 node2.oracle.com node2
            192.9.201.182 openfiler.oracle.com openfiler

            10.0.0.1 node1-priv.oracle.com node1-priv
            10.0.0.2 node2-priv.oracle.com node2-priv


            192.9.201.184 node1-vip.oracle.com node1-vip
            192.9.201.188 node2-vip.oracle.com node2-vip
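
            Once /etc/hosts is identical on both nodes, basic name resolution over both interfaces can be verified, for example from node1:

            $ ping -c 2 node2
            $ ping -c 2 node2-priv
            $ ping -c 2 openfiler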



            3. Prepare both nodes for installation
            -------------------------------------------

            a. Set Kernel Parameters (/etc/sysctl.conf)

            kernel.shmall = 2097152
            kernel.shmmax = 2147483648
            kernel.shmmni = 4096
            kernel.sem = 250 32000 100 128
            fs.file-max = 65536
            net.ipv4.ip_local_port_range = 1024 65000
            net.core.rmem_default = 262144
            net.core.rmem_max = 262144
            net.core.wmem_default = 262144
            net.core.wmem_max = 262144
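
            After editing /etc/sysctl.conf on both nodes, the parameters can be loaded without a reboot:

            # sysctl -p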

            b. Configure /etc/security/limits.conf file

            oracle soft nproc 2047
            oracle hard nproc 16384
            oracle soft nofile 1024
            oracle hard nofile 65536

            c. Configure /etc/pam.d/login file

            session required /lib/security/pam_limits.so

            d. Create user and groups on both nodes

            # groupadd oinstall
            # groupadd dba
            # groupadd oper
            # useradd -g oinstall -G dba oracle
            # passwd oracle
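
            Once the oracle user exists, you can verify its group membership and the shell limits from step b, for example:

            # id oracle          # expect groups oinstall and dba
            $ ulimit -Sn         # as oracle: soft open-files limit, expect 1024
            $ ulimit -Hn         # as oracle: hard open-files limit, expect 65536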

            e. Create required directories and set the ownership and permission.

            # mkdir -p /u01/crs1020
            # mkdir -p /u01/app/oracle/product/10.2.0/asm
            # mkdir -p /u01/app/oracle/product/10.2.0/db_1
            # chown -R oracle:oinstall /u01/
            # chmod -R 755 /u01/

            f. Set the environment variables

            $ vi .bash_profile
            ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
            ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
            #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
            #LANG="en_US"; export LANG
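
            Reload the profile and confirm the variables are set, for example:

            $ . ~/.bash_profile
            $ echo $ORACLE_BASE $ORA_CRS_HOME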




            4. Storage configuration
            -------------------------


            PART-A Open-filer Set-up
            -------------------------

            Install Openfiler on a machine (leave 60 GB of free space on the hard disk)

            a) Login to root user


            b) Start iSCSI target service

            # service iscsi-target start
            # chkconfig --level 345 iscsi-target on
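
            You can confirm the service is enabled for the right runlevels (assuming chkconfig is available on the Openfiler host, which it normally is) with:

            # chkconfig --list iscsi-target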



            PART-B Configuring Storage on openfiler
            ----------------------------------------

            a) From any client machine, open a browser and access the Openfiler console (port 446).

            https://192.9.201.182:446/

            b) Open the System tab and update the local network configuration for both nodes with netmask 255.255.255.255.

            c) From the Volume tab click "create a new physical volume group".

            d) From "Block Device Management" click on the "(/dev/sda)" option under the 'edit disk' option.

            e) Under the "Create a partition in /dev/sda" section, create a physical volume using the full size and then click on 'CREATE'.

            f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"

            g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".

            h) Then go to the "Volume Section" on the right-hand side tab and click on "Add Volumes"; specify the volume name (ex- racvol1), use all the space, set the "Filesystem/Volume type" to iSCSI and then click on CREATE.

            i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.

            j) Then go to "LUN Mapping" and click on "MAP".

            k) Then go to "Network ACL", allow both nodes from there, and click on UPDATE.



            Note: To create multiple volumes with Openfiler we would need to use multipathing, which is quite complex; that is why we are going with a single volume here. Edit the properties of the volume and change access to 'Allow'.



            l) Install the iscsi-initiator RPM on both nodes to access the iSCSI disk

            #rpm -ivh iscsi-initiator-utils-----------

            m) Make an entry in the iscsi.conf file for Openfiler on both nodes.

            #vim /etc/iscsi.conf (in RHEL-4)

            In this file you will find the line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address here.
            OR

            #vim /etc/iscsi/iscsi.conf (in RHEL-5)

            In this file you will find the line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address here.

            n) # service iscsi restart (on both nodes)

            o) From both nodes, run this command to discover the Openfiler volume:

            # iscsiadm -m discovery -t sendtargets -p 192.9.201.182

            p) # service iscsi restart (on both nodes)

            q) # chkconfig --level 345 iscsi on (on both nodes)
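
            At this point the iSCSI LUN should appear as a new local disk on both nodes (shown as /dev/sdb in the steps below, although the name can differ on your system); you can confirm this before partitioning with:

            # fdisk -l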

            r) Partition the disk: create 3 primary partitions and 1 extended partition, and within the extended partition create 11 logical partitions

            A. Prepare partitions

            1. #fdisk /dev/sdb
            ……
            :e (extended)
            Part No. 1
            First Cylinder:
            Last Cylinder:
            :p
            :n
            :l
            First Cylinder:
            Last Cylinder: +1024M
            …………………
            ……………………
            …………………………..
            2. Note the /dev/sdb* names.
            3. #partprobe
            4. Login as root user on node2 and run partprobe

            B. On node1 login as root user and create following raw devices

            # raw /dev/raw/raw5 /dev/sdb5
            # raw /dev/raw/raw6 /dev/sdb6
            ……………………………….
            ……………………………….
            # raw /dev/raw/raw12 /dev/sdb12

            Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above

            -Repeat the same thing on node2

            C. On node1 as root user

            # vi /etc/sysconfig/rawdevices
            /dev/raw/raw5 /dev/sdb5
            /dev/raw/raw6 /dev/sdb6
            /dev/raw/raw7 /dev/sdb7
            /dev/raw/raw8 /dev/sdb8
            /dev/raw/raw9 /dev/sdb9
            /dev/raw/raw10 /dev/sdb10
            /dev/raw/raw11 /dev/sdb11
            /dev/raw/raw12 /dev/sdb12
            /dev/raw/raw13 /dev/sdb13
            /dev/raw/raw14 /dev/sdb14
            /dev/raw/raw15 /dev/sdb15

            D. Restart the raw service (# service rawdevices restart)

            #service rawdevices restart

            Assigning devices:
            /dev/raw/raw5 --> /dev/sdb5
            /dev/raw/raw5: bound to major 8, minor 21
            /dev/raw/raw6 --> /dev/sdb6
            /dev/raw/raw6: bound to major 8, minor 22
            /dev/raw/raw7 --> /dev/sdb7
            /dev/raw/raw7: bound to major 8, minor 23
            /dev/raw/raw8 --> /dev/sdb8
            /dev/raw/raw8: bound to major 8, minor 24
            /dev/raw/raw9 --> /dev/sdb9
            /dev/raw/raw9: bound to major 8, minor 25
            /dev/raw/raw10 --> /dev/sdb10
            /dev/raw/raw10: bound to major 8, minor 26
            /dev/raw/raw11 --> /dev/sdb11
            /dev/raw/raw11: bound to major 8, minor 27
            /dev/raw/raw12 --> /dev/sdb12
            /dev/raw/raw12: bound to major 8, minor 28
            /dev/raw/raw13 --> /dev/sdb13
            /dev/raw/raw13: bound to major 8, minor 29
            /dev/raw/raw14 --> /dev/sdb14
            /dev/raw/raw14: bound to major 8, minor 30
            /dev/raw/raw15 --> /dev/sdb15
            /dev/raw/raw15: bound to major 8, minor 31
            done

            E. Repeat the same thing on node2 also


            F. To make these partitions accessible to the oracle user, run these commands from both nodes.

            # chown -R oracle:oinstall /dev/raw/raw*
            # chmod -R 755 /dev/raw/raw*

            G. To make these partitions accessible after a restart, make these entries on both nodes

            # vi /etc/rc.local
            chown -R oracle:oinstall /dev/raw/raw*
            chmod -R 755 /dev/raw/raw*





            5. SSH configuration (user equivalence)
            --------------------

            On node1:- $ssh-keygen -t rsa
            $ssh-keygen -t dsa

            On node2:- $ssh-keygen -t rsa
            $ssh-keygen -t dsa

            On node1:- $cd .ssh
            $cat *.pub>>node1

            On node2:- $cd .ssh
            $cat *.pub>>node2

            On node1:- $scp node1 node2:/home/oracle/.ssh
            On node2:- $scp node2 node1:/home/oracle/.ssh

            On node1:- $cat node*>>authorized_keys
            On node2:- $cat node*>>authorized_keys

            Now test the ssh configuration from both nodes

            $ vim a.sh
            ssh node1 hostname
            ssh node2 hostname
            ssh node1-priv hostname
            ssh node2-priv hostname

            $ chmod +x a.sh

            $./a.sh

            The first time you will have to give the password; after that it should never ask for a password again.
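
            If ssh still prompts for a password after this, it is usually a permissions problem: the .ssh directory and authorized_keys file must not be group or world writable. On both nodes, as oracle:

            $ chmod 700 ~/.ssh
            $ chmod 600 ~/.ssh/authorized_keys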


            6. To run the cluster verifier
            ----------------------------

            On node1 :-$cd /…/stage…/cluster…/cluvfy

            $./runcluvfy stage -pre crsinst -n node1,node2

            The first time it will ask for four new RPMs; remember that these RPMs depend on each other, so it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2); see the example after the list.

            1. compat-gcc-7.3-2.96.128.i386.rpm
            2. compat-gcc-c++-7.3-2.96.128.i386.rpm
            3. compat-libstdc++-7.3-2.96.128.i386.rpm
            4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
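
            For example, from the directory containing the downloaded packages, installing all four in a single rpm command lets rpm resolve the interdependencies (file names as listed above; adjust them to the exact versions you have):

            # rpm -ivh compat-libstdc++-7.3-2.96.128.i386.rpm \
                       compat-libstdc++-devel-7.3-2.96.128.i386.rpm \
                       compat-gcc-7.3-2.96.128.i386.rpm \
                       compat-gcc-c++-7.3-2.96.128.i386.rpm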

            Then run cluvfy again and check that it comes back clean; after that, start the clusterware installation.
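
            The clusterware installation itself is started as the oracle user from the clusterware staging area with the Oracle Universal Installer, typically:

            $ ./runInstaller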