
Cluvfy : Time zone consistency check failed

J Newbie

Hello All,

 

Env: 11.2.0.3 on Linux 5.8

Single node RAC

 

I am trying to add another node to my cluster.

 

When I run cluvfy, it shows:

 

cluvfy stage -pre nodeadd -n a0002 -fixup -verbose
:
:
(output trimmed)
:
Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency
Result: Time zone consistency check failed

 

 

The `date` command on both servers shows UTC as the timezone, and the times are in sync:

 

 

a0001:oracle(ps1) ~ % ssh a0002 date;date
Thu Oct  3 06:28:13 UTC 2013
Thu Oct  3 06:28:13 UTC 2013

 

cat /u01/app/11.2.0/grid/crs/install/s_crsconfig_a0001_env.txt
TZ=UTC
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
TNS_ADMIN=
ORACLE_BASE=
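
For reference, this is roughly how I compared the OS-level timezone on the two nodes beyond just running `date` (only a sketch; /etc/sysconfig/clock is the EL5-style location and may differ on other releases):

ssh a0001 'date +%Z; grep -i zone /etc/sysconfig/clock'
ssh a0002 'date +%Z; grep -i zone /etc/sysconfig/clock'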

 

Where does the script check for this?

 

TIA,

John

  • 1. Re: Cluvfy : Time zone consistency check failed
    Anar Godjaev Expert

    Hi,

     

    Try to correct the ntpd time with the steps below. Log in as the root user and run them on both nodes, one by one.

     

     

    [root@hostname2 ~]# service ntpd stop

    [root@hostname2 ~]# ntpdate <ntp_server>

    [root@hostname2 ~]# service ntpd start
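
    As a side note (not from your output, just a general 11.2 prerequisite worth double-checking): Grid Infrastructure expects ntpd to slew the clock rather than step it, which on EL5 means the -x option in /etc/sysconfig/ntpd:

    [root@hostname2 ~]# grep OPTIONS /etc/sysconfig/ntpd
    # the OPTIONS line should include -x so ntpd slews instead of stepping the clock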

     

     

    Now, cross-check the output with the command below.

     

     

    [root@ ~]# ssh hostname2 date;ssh hostname1 date

     

     

    If the dates are the same, the issue is resolved; otherwise ask your OS admin for help or open a ticket with Oracle Support.
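
    You can also confirm that ntpd on each node is actually synchronised to a peer (a quick extra check, assuming ntpd is your time source):

    [root@hostname1 ~]# ntpq -p
    [root@hostname2 ~]# ntpq -p
    # a peer marked with '*' and a small offset on both nodes means they are in sync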

     

    For more information, please check Oracle Support Doc ID 1487750.1.

  • 2. Re: Cluvfy : Time zone consistency check failed
    J Newbie

    Thanks Anar for stopping by.

     

     

    # clear out any old trace, point cluvfy tracing at it, and turn tracing on
    rm -rf /tmp/cvutrace
    mkdir /tmp/cvutrace
    export CV_TRACELOC=/tmp/cvutrace
    export SRVM_TRACE=true
    export SRVM_TRACE_LEVEL=1
    # rerun the check with tracing enabled
    <STAGE_AREA>/runcluvfy.sh stage -pre crsinst -n <node1>,<node2> -verbose
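
    With the trace in place, grepping the trace directory for the failing check narrows down what cluvfy is actually comparing (just how I searched it; the exact file names under /tmp/cvutrace vary):

    grep -ril "time zone" /tmp/cvutrace
    grep -i "TZ=" /tmp/cvutrace/*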

     

     

    I enabled tracing on cluvfy and found that the oracle user's .bash_profile on Node2 contained an interactive environment-setup script that required user input (something like: select 1 for ThisDB, select 2 for ThatDB). When cluvfy logged in to Node2 as the oracle user, it sat waiting for that input and eventually failed (a kind of timeout). I removed that entry and the check passed.
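
    For anyone who hits the same thing: one way to keep that kind of prompt from blocking non-interactive logins (like the ssh sessions cluvfy opens) is to guard it with an interactivity check in .bash_profile. A minimal sketch, assuming bash and a made-up name for the prompt script:

    # only run the interactive DB-selection prompt in real interactive shells;
    # non-interactive ssh sessions (e.g. from cluvfy) skip it
    case $- in
      *i*) . ~/select_db_env.sh ;;   # hypothetical script asking ThisDB/ThatDB
    esac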

     

