4 Replies | Latest reply: Jul 6, 2012 8:10 AM by user10858330

    Oracle Grid infrastructure installation issue

    user10858330
      During the installation of Oracle Grid Infrastructure I got the following error, and I could not understand why it appeared. After this error, everything on node 1 seems to run properly; I am even able to connect to the ASM instance on node 1, but on node 2 I am getting a listener error. Please check the current state of both nodes below and suggest whether I need to reinstall everything, or how I can avoid these errors again, and what causes them. I have posted the log file extract from the point where the errors begin to occur. Kindly help me fix them.

      NODE A
       
      C:\Users\Administrator>srvctl config scan_listener
      SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
      SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
      SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
      NODE B

      C:\Users\Administrator>srvctl config scan_listener
      PRCR-1035 : Failed to look up CRS resource ora.LISTENER_SCAN2.lsnr for LISTENER_SCAN2
      PRCR-1070 : Failed to check if resource ora.LISTENER_SCAN2.lsnr is registered
      Cannot communicate with crsd
      
      NODE A
      
      SQL> connect sys@ORCL1_ASM1 as sysasm
      Enter password:
      Connected.
      SQL> select GROUP_NUMBER,DISK_NUMBER,STATE,HEADER_STATUS,CREATE_DATE,MOUNT_DATE from v$asm_disk
        2  V$ASM_DISKGROUP;
      
      GROUP_NUMBER DISK_NUMBER STATE    HEADER_STATU CREATE_DA MOUNT_DAT
      ------------ ----------- -------- ------------ --------- ---------
                 1           0 NORMAL   MEMBER       02-JUL-12 06-JUL-12
                 1           1 NORMAL   MEMBER       02-JUL-12 06-JUL-12
                 1           2 NORMAL   MEMBER       02-JUL-12 06-JUL-12
                 1           3 NORMAL   MEMBER       02-JUL-12 06-JUL-12
                 1           4 NORMAL   MEMBER       02-JUL-12 06-JUL-12
      NODE B
      LSNRCTL> status
      Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
      STATUS of the LISTENER
      ------------------------
      Alias                     LISTENER
      Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.3.0 - Production
      Start Date                06-JUL-2012 15:59:42
      Uptime                    0 days 0 hr. 0 min. 57 sec
      Trace Level               off
      Security                  ON: Local OS Authentication
      SNMP                      OFF
      Listener Parameter File   C:\app\11.2.0\grid\network\admin\listener.ora
      Listener Log File         C:\app\11.2.0\grid\log\diag\tnslsnr\w2008-112-rac2\listener\alert\log.xml
      Listening Endpoints Summary...
        (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
      The listener supports no services
      The command completed successfully
      C:\Users\Administrator>ocrcheck -local
      Status of Oracle Local Registry is as follows :
               Version                  :          3
               Total space (kbytes)     :     262120
               Used space (kbytes)      :       2328
               Available space (kbytes) :     259792
               ID                       : 1885283431
               Device/File Name         : C:\app\11.2.0\grid\cdata\w2008-112-rac2.olr
                                          Device/File integrity check succeeded
      
               Local registry integrity check succeeded
      
               Logical corruption check succeeded
      The following are the errors I got during the installation of Grid Infrastructure:
      INFO: Completed Plugin named: Automatic Storage Management Configuration Assistant
      INFO: Completed 'Automatic Storage Management Configuration Assistant'
      INFO: Completed 'Automatic Storage Management Configuration Assistant'
      INFO: Started Plugin named: Oracle Cluster Verification Utility
      INFO: Found associated job
      INFO: Starting 'Oracle Cluster Verification Utility'
      INFO: Starting 'Oracle Cluster Verification Utility'
      INFO: 
      
      INFO: Performing post-checks for cluster services setup 
      
      INFO: 
      
      INFO: Checking node reachability...
      
      INFO: Node reachability check passed from node "w2008-112-rac1"
      
      INFO: 
      
      INFO: 
      
      INFO: Checking user equivalence...
      
      INFO: User equivalence check passed for user "Administrator"
      
      INFO: 
      
      INFO: Checking node connectivity...
      
      INFO: 
      
      INFO: Check: Node connectivity for interface "public"
      
      INFO: Node connectivity passed for interface "public"
      
      INFO: TCP connectivity check passed for subnet "192.168.0.0"
      
      INFO: 
      
      INFO: 
      
      INFO: Check: Node connectivity for interface "private"
      
      INFO: Node connectivity passed for interface "private"
      
      INFO: TCP connectivity check passed for subnet "192.168.1.0"
      
      INFO: 
      
      INFO: Checking subnet mask consistency...
      
      INFO: Subnet mask consistency check passed for subnet "192.168.0.0".
      
      INFO: Subnet mask consistency check passed for subnet "192.168.1.0".
      
      INFO: Subnet mask consistency check passed.
      
      INFO: 
      
      INFO: Node connectivity check passed
      
      INFO: 
      
      INFO: Checking multicast communication...
      
      INFO: 
      
      INFO: Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
      
      INFO: PRVG-11134 : Interface "192.168.0.154" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.155" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.156" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: Checking subnet "192.168.0.0" for multicast communication with multicast group "224.0.0.251"...
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.152" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.154" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.155" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.156" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.157" on node "w2008-112-rac2" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.152" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.154" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.155" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.156" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.151" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.152" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.154" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.155" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.156" on node "w2008-112-rac2"
      
      INFO: PRVG-11134 : Interface "192.168.0.153" on node "w2008-112-rac1" is not able to communicate with interface "192.168.0.151" on node "w2008-112-rac1"
      
      INFO: Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
      
      INFO: Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
      
      INFO: 
      
      INFO: Time zone consistency check passed
      
      INFO: 
      
      INFO: Checking Oracle Cluster Voting Disk configuration...
      
      INFO: 
      
      INFO: ERROR: 
      
      INFO: 
      
      INFO: PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
      
      INFO: w2008-112-rac2
      
      INFO: 
      
      INFO: Oracle Cluster Voting Disk configuration check passed
      
      INFO: 
      
      INFO: Checking Cluster manager integrity... 
      
      INFO: 
      
      INFO: 
      
      INFO: Checking CSS daemon...
      
      INFO: 
      
      INFO: ERROR: 
      
      INFO: PRVF-5302 : Failed to execute the exectask command on node "w2008-112-rac1" 
      
      INFO: C:\app\11.2.0\grid\bin\crsctl check css
      
      INFO: <CV_CMD>C:\app\11.2.0\grid\bin\crsctl check css </CV_CMD><CV_VAL>CRS-4639: Could not contact Oracle High Availability Services
      
      INFO: CRS-4000: Command Check failed, or completed with errors.
      
      INFO: </CV_VAL><CV_VRES>1</CV_VRES><CV_LOG>Exectask: runexe was successful</CV_LOG><CV_ERR>Error while running runexe</CV_ERR><CV_ERES>1</CV_ERES>
      INFO: 
      
      INFO: Oracle Cluster Synchronization Services appear to be online.
      
      INFO: 
      
      INFO: Cluster manager integrity check passed
      
      INFO: 
      
      INFO: 
      
      INFO: Checking cluster integrity...
      
      INFO: 
      
      INFO: 
      
      INFO: Cluster integrity check passed
      
      INFO: 
      
      INFO: 
      
      INFO: Checking OCR integrity...
      
      INFO: 
      
      INFO: Checking the absence of a non-clustered configuration...
      
      INFO: All nodes free of non-clustered, local-only configurations
      
      INFO: 
      
      INFO: 
      
      INFO: ERROR: 
      
      INFO: 
      
      INFO: PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
      
      INFO: 
      
      INFO:      w2008-112-rac2
      
      INFO: 
      
      INFO: ERROR: 
      
      INFO: 
      
      INFO: PRVF-4195 : Disk group for ocr location "+DATA" not available on the following nodes:
      
      INFO: 
      
      INFO:      w2008-112-rac2
      
      INFO: 
      
      INFO: NOTE: 
      
      INFO: This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
      
      INFO: 
      
      INFO: OCR integrity check failed
      
      INFO: 
      
      INFO: Checking CRS integrity...
      
      INFO: 
      
      INFO: Clusterware version consistency passed
      
      INFO: 
      
      INFO: CRS integrity check passed
      
      INFO: 
      
      INFO: Checking node application existence...
      
      INFO: 
      
      INFO: Checking existence of VIP node application (required)
      
      INFO: VIP node application is offline on nodes "w2008-112-rac1"
      
      INFO: 
      
      INFO: Checking existence of NETWORK node application (required)
      
      INFO: NETWORK node application check passed
      
      INFO: 
      
      INFO: Checking existence of GSD node application (optional)
      
      INFO: GSD node application is offline on nodes "w2008-112-rac2,w2008-112-rac1"
      
      INFO: 
      
      INFO: Checking existence of ONS node application (optional)
      
      INFO: ONS node application is offline on nodes "w2008-112-rac1"
      
      INFO: 
      
      INFO: 
      
      INFO: Checking Single Client Access Name (SCAN)...
      
      INFO: 
      
      INFO: Checking TCP connectivity to SCAN Listeners...
      
      INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
      
      INFO: 
      
      INFO: Verification of SCAN VIP and Listener setup passed
      
      INFO: 
      
      INFO: Checking OLR integrity...
      
      INFO: 
      
      INFO: Checking OLR config file...
      
      INFO: 
      
      INFO: OLR config file check successful
      
      INFO: 
      
      INFO: 
      
      INFO: Checking OLR file attributes...
      
      INFO: 
      
      INFO: OLR file check successful
      
      INFO: 
      
      INFO: 
      
      INFO: WARNING: 
      
      INFO: This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
      
      INFO: 
      
      INFO: OLR integrity check passed
      
      INFO: OCR detected on ASM. Running ACFS Integrity checks...
      
      INFO: 
      
      INFO: Starting check to see if ASM is running on all cluster nodes...
      
      INFO: 
      
      INFO: Starting Disk Groups check to see if at least one Disk Group configured...
      
      INFO: Disk Group Check passed. At least one Disk Group configured
      
      INFO: 
      
      INFO: Task ACFS Integrity check failed
      
      INFO: 
      
      INFO: Checking if Clusterware is installed on all nodes...
      
      INFO: Check of Clusterware install passed
      
      INFO: 
      
      INFO: Checking if CTSS Resource is running on all nodes...
      
      INFO: CTSS resource check passed
      
      INFO: 
      
      INFO: 
      
      INFO: Querying CTSS for time offset on all nodes...
      
      INFO: Query of CTSS for time offset passed
      
      INFO: 
      
      INFO: Check CTSS state started...
      
      INFO: CTSS is in Observer state. Switching over to clock synchronization checks using NTP
      
      INFO: 
      
      INFO: 
      
      INFO: Starting Clock synchronization checks using Network Time Protocol(NTP)...
      
      INFO: 
      
      INFO: Checking daemon liveness...
      
      INFO: Liveness check passed for "W32Time"
      
      INFO: Check for NTP daemon or service alive passed on all nodes
      
      INFO: 
      
      INFO: Clock synchronization check using Network Time Protocol(NTP) passed
      
      INFO: 
      
      INFO: 
      
      INFO: Oracle Cluster Time Synchronization Services check passed
      
      INFO: Checking VIP configuration.
      
      INFO: Checking VIP Subnet configuration.
      
      INFO: Check for VIP Subnet configuration passed.
      
      INFO: Checking VIP reachability
      
      INFO: Check for VIP reachability passed.
      
      INFO: 
      
      INFO: Post-check for cluster services setup was unsuccessful on all the nodes. 
      
      WARNING: 
      INFO: 
      INFO: Completed Plugin named: Oracle Cluster Verification Utility
      INFO: Oracle Cluster Verification Utility failed.
      INFO: Oracle Cluster Verification Utility failed.
      INFO: ConfigClient.executeToolsInAggregate action performed
      INFO: Exiting ConfigClient.executeToolsInAggregate method
      INFO: Calling event ConfigToolsExecuted
      INFO: 
       The Runconfig command constructed is C:\app\11.2.0\grid\oui\bin\runConfig.bat ORACLE_HOME=C:\app\11.2.0\grid MODE=perform ACTION=configure RERUN=false $*
      INFO: Created a new file C:\app\11.2.0\grid\cfgtoollogs\configToolFailedCommands
      INFO: Since the option is to overwrite the existing C:\app\11.2.0\grid\cfgtoollogs\configToolFailedCommands file, backing it up
      INFO: The backed up file name is C:\app\11.2.0\grid\cfgtoollogs\configToolFailedCommands.bak
      WARNING: readme.txt file doesn't exits
      INFO: ConfigClient.saveSession method called
      INFO: Calling event ConfigSessionEnding
      INFO: ConfigClient.endSession method called
      INFO: Completed Configuration
      INFO: Shutting down OUISetupDriver.JobExecutorThread
      INFO: Cleaning up, please wait...
      INFO: Dispose the install area control object
      INFO: Update the state machine to STATE_CLEAN
      INFO: Setup completed with overall status as Failed
      INFO: All forked task are completed at state setup
      INFO: Completed background operations
      INFO: Moved to state <setup>
      INFO: Adding ExitStatus SUCCESS_WITH_WARNINGS to the exit status set
      INFO: Finding the most appropriate exit status for the current application
      INFO: Exit Status is 6
      INFO: List of warnings encountered in this Application:
      INFO: PREREQS_FAILED_WITH_WARNING
      INFO: Shutdown Oracle Grid Infrastructure
        • 1. Re: Oracle Grid infrastructure installation issue
          Sebastian Solbach -Dba Community-Oracle
          Hi,

          as you can see in the logs, the ASM instance on node 2 is not running -> and if ASM is not running, then your whole cluster stack is not really working.
          What version did you use? 11.2.0.1 or 11.2.0.3?

          However, this does not tell you whether the error occurred during installation or whether something crashed later (before cluvfy ran).
          It still might be that everything was o.k.

          Have you tried restarting node 2? If you are lucky, it might come up and everything will be working.

          If not, you may want to try to reinitialize the second node.
          Under Linux you could do a rootcrs.pl -deconfig followed by a rootcrs.pl; I am not sure about the exact procedure on Windows, though.
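
          A rough sketch of that sequence (Linux syntax, run as root; $GRID_HOME is a placeholder for your Grid home, and I have not verified the Windows equivalent):

          cd $GRID_HOME/crs/install
          $GRID_HOME/perl/bin/perl rootcrs.pl -deconfig
          $GRID_HOME/perl/bin/perl rootcrs.pl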

          Regards
          Sebastian
          • 2. Re: Oracle Grid infrastructure installation issue
            user10858330
            as you can see in the logs, the ASM instance on node 2 is not running -> and if ASM is not running, then your whole cluster stack is not really working.
            What version did you use? 11.2.0.1 or 11.2.0.3?
            Thanks for the reply. Sorry, I forgot to mention that I am using 11.2.0.3 on Windows Server 2008 in VirtualBox. I am not able to connect to ASM on node B because there is some problem with the listener, but I don't think ASM is down on node B. How can I check whether ASM is running on node B or not? I tried asmcmd, but it fails to start on both nodes with a Perl error, even though I set the Grid home and ASM SID before issuing the asmcmd command (see the command sketch at the end of this reply). When I run ASMCA from node A it shows both ASM instances, ASM1 and ASM2, as up.
            However this does not tell you if the error has occurred during installation, or something had crashed later (before cluvfy).
            It still might be that everything was o.k. 
            Yes, it might be. So do you recommend reinstalling from scratch? I am following Tim Hall's article on installing RAC on Windows using VirtualBox. I have hit hundreds of errors and it took me 15 days to reach this point. I was quite sure I would install it successfully, but it has trapped me here again.
            Have you tried restarting Node2? If you are lucky it might come up and everything is working.
            Yes, I have restarted node 2 several times, but in vain.
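
            For reference, this is roughly what I ran when trying to check it (the ORACLE_HOME and ORACLE_SID values are from my environment, and I am assuming srvctl/crsctl are the right way to see the instance state):

            REM on node B, before running asmcmd
            set ORACLE_HOME=C:\app\11.2.0\grid
            set ORACLE_SID=+ASM2
            C:\app\11.2.0\grid\bin\asmcmd lsdg

            REM from node A, to see the state of ASM on both nodes
            srvctl status asm
            crsctl stat res ora.asm -t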
            • 3. Re: Oracle Grid infrastructure installation issue
              Sebastian Solbach -Dba Community-Oracle
              Hi,

              => It's not a problem with the listener. ASM will be running before the listener is running.
              Please check the ASM alert.log to see what is wrong:

              adrci
              show alert
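
              If adrci does not land in the ASM home by default, something like this gets you to the right alert log (the homepath below is only my guess at how it appears on your node; 'show homes' lists the real value):

              adrci> show homes
              adrci> set homepath diag\asm\+asm\+ASM2
              adrci> show alert -tail 100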

              => Reinstallation. There is no need to reinstall everything, since node A is running fine.

              With rootcrs.pl (or the Windows command file under ORACLE_HOME\crs\install, probably rootcrs.cmd) you should be able to just rerun the configuration.

              Simply call the tool from CMD and you will see its help.
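
              For example, the call would look roughly like this (the Grid home path is taken from your logs; if the file is rootcrs.pl rather than rootcrs.cmd, it has to go through the Perl that ships with the Grid home, and the perl\bin location is my assumption):

              cd C:\app\11.2.0\grid\crs\install
              C:\app\11.2.0\grid\perl\bin\perl.exe rootcrs.pl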

              Regards
              Sebastian
              • 4. Re: Oracle Grid infrastructure installation issue
                user10858330
                I found a file named rootcrs.pl in the install directory (how do I run it?). I restarted the node after posting. I have pasted the ASM alert log extract here; I found these errors in ASM, have a look:
                WARNING: failed to online diskgroup resource ora.DATA.dg (unable to communicate with CRSD/OHASD)
                NOTE: Attempting voting file relocation on diskgroup DATA
                Fri Jul 06 17:04:00 2012
                NOTE: Volume support  enabled
                Starting up:
                Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
                With the Real Application Clusters and Automatic Storage Management options.
                Windows NT Version V6.0 Service Pack 1 
                CPU                 : 2 - type 8664, 2 Physical Cores
                Process Affinity    : 0x0x0000000000000000
                Memory (Avail/Total): Ph:2084M/3070M, Ph+PgF:5289M/6347M 
                Using parameter settings in server-side spfile +DATA/w2008-scan/asmparameterfile/registry.253.787577543
                System parameters with non-default values:
                  large_pool_size          = 12M
                  instance_type            = "asm"
                  remote_login_passwordfile= "EXCLUSIVE"
                  asm_power_limit          = 1
                  diagnostic_dest          = "C:\APP\ADMINISTRATOR"
                Cluster communication is configured to use the following interface(s) for this instance
                  192.168.1.152
                cluster interconnect IPC version:Oracle 11 Winsock2 TCP/IP IPC
                IPC Vendor 1 proto 1
                  Version 2.0
                Fri Jul 06 17:04:36 2012
                PMON started with pid=2, OS id=740 
                Fri Jul 06 17:04:37 2012
                PSP0 started with pid=3, OS id=1656 
                Fri Jul 06 17:04:39 2012
                VKTM started with pid=4, OS id=2596 at elevated priority
                VKTM running at (10)millisec precision with DBRM quantum (100)ms
                Fri Jul 06 17:04:39 2012
                GEN0 started with pid=5, OS id=1584 
                Fri Jul 06 17:04:39 2012
                DIAG started with pid=6, OS id=2152 
                Fri Jul 06 17:04:39 2012
                PING started with pid=7, OS id=2552 
                Fri Jul 06 17:04:40 2012
                DIA0 started with pid=8, OS id=1636 
                Fri Jul 06 17:04:40 2012
                LMON started with pid=9, OS id=2560 
                Fri Jul 06 17:04:41 2012
                LMD0 started with pid=10, OS id=2628 
                Fri Jul 06 17:04:42 2012
                LMS0 started with pid=11, OS id=2728 at elevated priority
                * CPU Monitor used for high load check 
                * New Low - High Load Threshold Range = [60 - 80] 
                Fri Jul 06 17:04:42 2012
                LMHB started with pid=12, OS id=2168 
                Fri Jul 06 17:04:42 2012
                MMAN started with pid=13, OS id=2844 
                Fri Jul 06 17:04:43 2012
                DBW0 started with pid=14, OS id=2860 
                Fri Jul 06 17:04:44 2012
                LGWR started with pid=15, OS id=968 
                Fri Jul 06 17:04:44 2012
                CKPT started with pid=16, OS id=2932 
                Fri Jul 06 17:04:44 2012
                SMON started with pid=17, OS id=884 
                Fri Jul 06 17:04:44 2012
                RBAL started with pid=18, OS id=2824 
                Fri Jul 06 17:04:45 2012
                GMON started with pid=19, OS id=1900 
                Fri Jul 06 17:04:45 2012
                MMON started with pid=20, OS id=3020 
                Fri Jul 06 17:04:45 2012
                MMNL started with pid=21, OS id=2864 
                lmon registered with NM - instance number 2 (internal mem no 1)
                Fri Jul 06 17:04:52 2012
                Reconfiguration started (old inc 0, new inc 8)
                ASM instance 
                List of instances:
                 1 2 (myinst: 2) 
                 Global Resource Directory frozen
                * allocate domain 0, invalid = TRUE 
                 Communication channels reestablished
                Fri Jul 06 17:04:55 2012
                 * domain 0 valid = 1 according to instance 1 
                * allocate domain 1, invalid = TRUE 
                 * domain 1 valid = 1 according to instance 1 
                 Master broadcasted resource hash value bitmaps
                 Non-local Process blocks cleaned out
                Fri Jul 06 17:04:56 2012
                 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
                 Set master node info 
                 Submitted all remote-enqueue requests
                 Dwn-cvts replayed, VALBLKs dubious
                 All grantable enqueues granted
                 Submitted all GCS remote-cache requests
                 Fix write in gcs resources
                Reconfiguration complete
                Fri Jul 06 17:04:57 2012
                LCK0 started with pid=23, OS id=1660 
                Fri Jul 06 17:05:04 2012
                ORACLE_BASE not set in environment. It is recommended
                that ORACLE_BASE be set in the environment
                Fri Jul 06 17:05:10 2012
                SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:2} */ 
                NOTE: Diskgroup used for Voting files is:
                       DATA
                Diskgroup with spfile:DATA
                Diskgroup used for OCR is:DATA
                NOTE: cache registered group DATA number=1 incarn=0x3250b204
                NOTE: cache began mount (not first) of group DATA number=1 incarn=0x3250b204
                NOTE: Assigning number (1,0) to disk (\\.\ORCLDISK1)
                NOTE: Assigning number (1,1) to disk (\\.\ORCLDISK2)
                NOTE: Assigning number (1,2) to disk (\\.\ORCLDISK3)
                NOTE: Assigning number (1,3) to disk (\\.\ORCLDISK4)
                NOTE: Assigning number (1,4) to disk (\\.\ORCLDISK5)
                Fri Jul 06 17:05:20 2012
                GMON querying group 1 at 2 for pid 24, osid 1932
                Fri Jul 06 17:05:20 2012
                NOTE: cache opening disk 0 of grp 1: DATA_0000 path:\\.\ORCLDISK1
                NOTE: F1X0 found on disk 0 au 2 fcn 0.0
                NOTE: cache opening disk 1 of grp 1: DATA_0001 path:\\.\ORCLDISK2
                NOTE: cache opening disk 2 of grp 1: DATA_0002 path:\\.\ORCLDISK3
                NOTE: cache opening disk 3 of grp 1: DATA_0003 path:\\.\ORCLDISK4
                NOTE: cache opening disk 4 of grp 1: DATA_0004 path:\\.\ORCLDISK5
                NOTE: cache mounting (not first) external redundancy group 1/0x3250B204 (DATA)
                Fri Jul 06 17:05:20 2012
                kjbdomatt send to inst 1
                Fri Jul 06 17:05:21 2012
                NOTE: attached to recovery domain 1
                NOTE: redo buffer size is 256 blocks (1053184 bytes)
                Fri Jul 06 17:05:21 2012
                NOTE: LGWR attempting to mount thread 1 for diskgroup 1 (DATA)
                NOTE: LGWR found thread 1 closed at ABA 7.71
                NOTE: LGWR mounted thread 1 for diskgroup 1 (DATA)
                NOTE: LGWR opening thread 1 at fcn 0.762 ABA 8.72
                NOTE: cache mounting group 1/0x3250B204 (DATA) succeeded
                NOTE: cache ending mount (success) of group DATA number=1 incarn=0x3250b204
                GMON querying group 1 at 3 for pid 18, osid 2824
                Fri Jul 06 17:05:23 2012
                NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 1
                SUCCESS: diskgroup DATA was mounted
                SUCCESS: ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:2} */
                NOTE: Attempting voting file refresh on diskgroup DATA
                NOTE: Voting file relocation is required in diskgroup DATA
                SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */ 
                SUCCESS: ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */
                Fri Jul 06 17:05:25 2012
                WARNING: failed to online diskgroup resource ora.DATA.dg (unable to communicate with CRSD/OHASD)
                NOTE: Attempting voting file relocation on diskgroup DATA
                Fri Jul 06 17:06:39 2012
                Time drift detected. Please check VKTM trace file for more details.
                Fri Jul 06 17:08:10 2012
                NOTE: [crsd.exe 2968:3060] opening OCR file
                Starting background process ASMB
                Fri Jul 06 17:08:14 2012
                ASMB started with pid=26, OS id=3960 
                Fri Jul 06 17:08:21 2012
                NOTE: client +asm2:+ASM registered, osid 3992, mbr 0x0