Dear DBA Frank

19c preupgrade.jar

Posted by Dear DBA Frank Aug 20, 2019

See https://docs.oracle.com/en/database/oracle/oracle-database/19/spuss/using-the-pre-upgrade-information-tool-for-oracle-db…

 

Oracle provides a tool that runs several checks before upgrading your database, be it CDB or non-CDB, to 19c.

preupgrade.jar comes with Oracle 19c, but you can also download it separately so that you can put it on the servers where you will run upgrades to 19c in the future.  Here are the contents of the zip file:

Here is an extract from the Oracle documentation:

To run preupgrade.jar, you must set your environment to the old ORACLE_HOME:

 

mobo12cR1@oracle::~ # export ORACLE_BASE=/app/mobo12cR1/oracle

mobo12cR1@oracle::~ # export ORACLE_HOME=/app/mobo12cR1/oracle/product/12.1.0

mobo12cR1@oracle::~ # export ORACLE_SID=RMP2FD

mobo12cR1@oracle:RMP2FD:~ # export PATH=.:$ORACLE_HOME/bin:$PATH

 

Then run preupgrade.jar with the java from your old ORACLE_HOME:

mobo12cR1@oracle:RMP2FD:~ # $ORACLE_HOME/jdk/bin/java -jar /app/mobo12cR1/oracle/19c/rdbms/admin/preupgrade.jar TERMINAL TEXT

 

Report generated by Oracle Database Pre-Upgrade Information Tool Version

19.0.0.0.0 Build: 1 on 2019-08-13T11:30:13

 

Upgrade-To version: 19.0.0.0.0

 

=======================================

Status of the database prior to upgrade

=======================================

      Database Name:  RMP2FD

     Container Name:  RMP2FD

       Container ID:  0

            Version:  12.1.0.2.0

     DB Patch Level:  No Patch Bundle applied

         Compatible:  11.2.0.0.0

          Blocksize:  8192

           Platform:  Linux x86 64-bit

      Timezone File:  18

  Database log mode:  ARCHIVELOG

           Readonly:  FALSE

            Edition:  EE

 

  Oracle Component                       Upgrade Action    Current Status

  ----------------                       --------------    --------------

  Oracle Server                          [to be upgraded]  VALID

  Oracle XML Database                    [to be upgraded]  VALID

 

==============

BEFORE UPGRADE

==============

 

  REQUIRED ACTIONS

  ================

  None

 

  RECOMMENDED ACTIONS

  ===================

  1.  Remove initialization parameters that Oracle has obsoleted or removed.

      This action may be done now or when starting the database in upgrade mode

      using the target ORACLE HOME.

 

      Parameter

      ---------

      utl_file_dir

 

      If parameters that are obsolete or removed from the target release are

      present in the pfile/spfile, the Oracle database may not start, or it may

      start with an ORA- error.

 

  2.  Run 12.1.0.2.0 $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile invalid

      objects.  You can view the individual invalid objects with

 

        SET SERVEROUTPUT ON;

        EXECUTE DBMS_PREUP.INVALID_OBJECTS;

 

      2 objects are INVALID.

 

      There should be no INVALID objects in SYS/SYSTEM or user schemas before

      database upgrade.

 

  3.  Perform one of the following:

       1) Expire user accounts that use only the old 10G password version and

      follow the procedure recommended in Oracle Database Upgrade Guide under

      the section entitled, "Checking for Accounts Using Case-Insensitive

      Password Version".

       2) Explicitly set SQLNET.ALLOWED_LOGON_VERSION_SERVER in the 19

      SQLNET.ORA to a non-Exclusive Mode value, such as "11". (This is a short

      term approach and is not recommended because it will retain known

      security risks associated with the 10G password version.)

 

      Your database system has at least one account with only the 10G password

      version (see the PASSWORD_VERSIONS column of DBA_USERS).

 

      Starting with Oracle Database release 12.2.0.1, Exclusive Mode is the new

      default password-based authentication mode. All Exclusive Mode

      login/authentication attempts will fail for preexisting user accounts

      which only have the 10G password version and neither the 11G or 12C

      password version (see DBA_USERS.PASSWORD_VERSIONS.) For more information,

      refer to "Understanding Password Case Sensitivity and Upgrades" in the

      Oracle Database Upgrade Guide.

 

  4.  Remove the SEC_CASE_SENSITIVE_LOGON instance initialization parameter

      setting, to configure your system to use case sensitive password

      authentication by default.

 

      Your database system is configured to enforce case insensitive password

      authentication (the SEC_CASE_SENSITIVE_LOGON instance initialization

      parameter setting is FALSE).

 

      Starting with Oracle Database release 12.2, Exclusive Mode is the default

      password-based authentication mode. Case insensitive password

      authentication is not supported in Exclusive Mode. If your system needs

      to use case insensitive password authentication, Exclusive Mode must be

      switched off prior to the upgrade. See the Network Reference Manual

      chapter about the SQLNET.ORA parameter

      SQLNET.ALLOWED_LOGON_VERSION_SERVER for more details on Exclusive Mode.

 

  5.  (AUTOFIXUP) Gather stale data dictionary statistics prior to database

      upgrade in off-peak time using:

 

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

 

      Dictionary statistics do not exist or are stale (not up-to-date).

 

      Dictionary statistics help the Oracle optimizer find efficient SQL

      execution plans and are essential for proper upgrade timing. Oracle

      recommends gathering dictionary statistics in the last 24 hours before

      database upgrade.

 

      For information on managing optimizer statistics, refer to the 12.1.0.2

      Oracle Database SQL Tuning Guide.

 

  6.  (AUTOFIXUP) Directly grant ADMINISTER DATABASE TRIGGER privilege to the

      owner of the trigger or drop and re-create the trigger with a user that

      was granted directly with such. You can list those triggers using: SELECT

      OWNER, TRIGGER_NAME FROM DBA_TRIGGERS WHERE

      TRIM(BASE_OBJECT_TYPE)='DATABASE' AND OWNER NOT IN (SELECT GRANTEE FROM

      DBA_SYS_PRIVS WHERE PRIVILEGE='ADMINISTER DATABASE TRIGGER').

 

      There is one or more database triggers whose owner does not have the

      right privilege on the database.

 

      The creation of database triggers must be done by users granted with

      ADMINISTER DATABASE TRIGGER privilege. Privilege must have been granted

      directly.

 

  INFORMATION ONLY

  ================

  7.  Consider removing the following deprecated initialization parameters.

 

      Parameter

      ---------

      sec_case_sensitive_logon

 

      These deprecated parameters probably will be obsolete in a future release.

 

  8.  Ensure there is additional disk space in LOG_ARCHIVE_DEST_1 for at least

      2181 MB of archived logs.  Check alert log during the upgrade that there

      is no write error to the destination due to lack of disk space.

 

      Archiving cannot proceed if the archive log destination is full during

      upgrade.

 

      Archive Log Destination:

       Parameter    :  LOG_ARCHIVE_DEST_1

       Destination  :  /data/lxsora04/ora10/RMP2FD/arch

 

      The database has archiving enabled.  The upgrade process will need free

      disk space in the archive log destination(s) to generate archived logs to.

 

  9.  Check the Oracle Backup and Recovery User's Guide for information on how

      to manage an RMAN recovery catalog schema.

 

      If you are using a version of the recovery catalog schema that is older

      than that required by the RMAN client version, then you must upgrade the

      catalog schema.

 

      It is good practice to have the catalog schema the same or higher version

      than the RMAN client version you are using.

 

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database RMP2FD

  which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

 

    SQL>@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

=============

AFTER UPGRADE

=============

 

  REQUIRED ACTIONS

  ================

  None

 

  RECOMMENDED ACTIONS

  ===================

  10. (AUTOFIXUP) If you use the -T option for the database upgrade, then run

      $ORACLE_HOME/rdbms/admin/utluptabdata.sql after the upgrade is complete,

      to VALIDATE and UPGRADE any user tables affected by changes to

      Oracle-Maintained types.

 

      There are user tables dependent on Oracle-Maintained object types.

 

      If the -T option is used to set user tablespaces to READ ONLY during the

      upgrade, user tables in those tablespaces, that are dependent on

      Oracle-Maintained types, will not be automatically upgraded. If a type is

      evolved during the upgrade, any dependent tables need to be re-validated

      and upgraded to the latest type version AFTER the database upgrade

      completes.

 

  11. Upgrade the database time zone file using the DBMS_DST package.

 

      The database is using time zone file version 18 and the target 19 release

      ships with time zone file version 32.

 

      Oracle recommends upgrading to the desired (latest) version of the time

      zone file.  For more information, refer to "Upgrading the Time Zone File

      and Timestamp with Time Zone Data" in the 19 Oracle Database

      Globalization Support Guide.

 

  12. To identify directory objects with symbolic links in the path name, run

      $ORACLE_HOME/rdbms/admin/utldirsymlink.sql AS SYSDBA after upgrade.

      Recreate any directory objects listed, using path names that contain no

      symbolic links.

 

      Some directory object path names may currently contain symbolic links.

 

      Starting in Release 18c, symbolic links are not allowed in directory

      object path names used with BFILE data types, the UTL_FILE package, or

      external tables.

 

  13. (AUTOFIXUP) Gather dictionary statistics after the upgrade using the

      command:

 

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

 

      Oracle recommends gathering dictionary statistics after upgrade.

 

      Dictionary statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans. After a database

      upgrade, statistics need to be re-gathered as there can now be tables

      that have significantly changed during the upgrade or new tables that do

      not have statistics gathered yet.

 

  14. Gather statistics on fixed objects after the upgrade and when there is a

      representative workload on the system using the command:

 

        EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

 

      This recommendation is given for all preupgrade runs.

 

      Fixed object statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans.  Those

      statistics are specific to the Oracle Database release that generates

      them, and can be stale upon database upgrade.

 

      For information on managing optimizer statistics, refer to the 12.1.0.2

      Oracle Database SQL Tuning Guide.

 

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database RMP2FD

  which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

 

    SQL>@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

 

==================

PREUPGRADE SUMMARY

==================

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade.log

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

Execute fixup scripts as indicated below:

 

Before upgrade:

 

Log into the database and execute the preupgrade fixups

@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

After the upgrade:

 

Log into the database and execute the postupgrade fixups

@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

Preupgrade complete: 2019-08-13T11:30:14

 

For information, preupgrade.jar writes its log to the directory given by the DIR parameter on its command line; otherwise it goes to $ORACLE_BASE/cfgtoollogs/<db_unique_name>/preupgrade if $ORACLE_BASE is defined, or to $ORACLE_HOME/cfgtoollogs/<db_unique_name>/preupgrade if it is not. Here the log is /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade.log
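If you want the output somewhere other than the default location, the DIR parameter can be added to the command line. A sketch reusing the paths from above (the output directory is just an example):

$ORACLE_HOME/jdk/bin/java -jar /app/mobo12cR1/oracle/19c/rdbms/admin/preupgrade.jar FILE TEXT DIR /tmp/preupgrade_RMP2FD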

 

Then run the preupgrade_fixups.sql script that preupgrade.jar generated (if you are upgrading a CDB, several preupgrade_fixups scripts will have been generated, each bearing a PDB name):

mobo12cR1@oracle:RMP2FD:~ #  sqlplus '/as sysdba'

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 13 11:32:35 2019

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

 

SQL> @/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

Executing Oracle PRE-Upgrade Fixup Script

 

Auto-Generated by:       Oracle Preupgrade Script

                         Version: 19.0.0.0.0 Build: 1

Generated on:            2019-08-13 11:30:10

 

For Source Database:     RMP2FD

Source Database Version: 12.1.0.2.0

For Upgrade to Version:  19.0.0.0.0

 

Preup                             Preupgrade

Action                            Issue Is

Number  Preupgrade Check Name     Remedied    Further DBA Action

------  ------------------------  ----------  --------------------------------

    1.  parameter_obsolete        NO          Manual fixup recommended.

    2.  invalid_objects_exist     NO          Manual fixup recommended.

    3.  exclusive_mode_auth       NO          Manual fixup recommended.

    4.  case_insensitive_auth     NO          Manual fixup recommended.

    5.  dictionary_stats          YES         None.

    6.  trgowner_no_admndbtrg     YES         None.

    7.  parameter_deprecated      NO          Informational only.

                                              Further action is optional.

    8.  min_archive_dest_size     NO          Informational only.

                                              Further action is optional.

    9.  rman_recovery_version     NO          Informational only.

                                              Further action is optional.

 

The fixup scripts have been run and resolved what they can. However,

there are still issues originally identified by the preupgrade that

have not been remedied and are still present in the database.

Depending on the severity of the specific issue, and the nature of

the issue itself, that could mean that your database is not ready

for upgrade.  To resolve the outstanding issues, start by reviewing

the preupgrade_fixups.sql and searching it for the name of

the failed CHECK NAME or Preupgrade Action Number listed above.

There you will find the original corresponding diagnostic message

from the preupgrade which explains in more detail what still needs

to be done.

 

PL/SQL procedure successfully completed.

 

Oracle is kind enough to tell us which issues have been solved and which have not. For the latter, we must go to preupgrade_fixups.sql and look up the appropriate paragraph. Isn't it nice that we get an explanation on what to do with those outstanding issues? 

 

mobo12cR1@oracle:RMP2FD:/app/exploit/shell # view +/parameter_obsolete /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

 

So the first issue is the obsolete utl_file_dir parameter. Let's get rid of it.

SQL>  alter system reset utl_file_dir;

System altered.

 

The second issue is the invalid_objects_exist one.  In my case it was a couple of objects in a user schema that are not used. Still I ran utlrp.sql:

SQL> @?/rdbms/admin/utlrp.sql

and then had no more invalid objects.
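A quick way to verify, with a standard dictionary query:

SQL> select count(*) from dba_objects where status = 'INVALID';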

 

The third issue is exclusive_mode_auth. Let's see what preupgrade_fixups.sql says about that:

mobo12cR1@oracle:RMP2FD:/app/exploit/shell # view +/exclusive_mode_auth /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

To identify those accounts, use the procedure labelled "Checking for Accounts Using Case-Insensitive Password Version" in https://docs.oracle.com/cd/F19136_01/upgrd/database-upgrade-guide.pdf, which says that you must make a note of those accounts, make a change in sqlnet.ora, run the upgrade to 19c and then expire the passwords of those accounts.

SELECT  username,password_versions

FROM dba_users

where ( password_versions = '10G ' OR password_versions = '10G HTTP ' ) and username !='ANONYMOUS';

So I make a note that I'll have to expire the password of that user after the upgrade (ALTER USER username PASSWORD EXPIRE;).

Right now I must add a line in my sqlnet.ora:

 

mobo12cR1@oracle:RMP2FD:/app/mobo12cR1/oracle/product/12.1.0/network/admin # echo "SQLNET.ALLOWED_LOGON_VERSION_SERVER=10" >> sqlnet.ora ; tail sqlnet.ora

DIAG_ADR_ENABLED=on

ADR_BASE=/app/mobo12cR1/oracle/diag/clients

SQLNET.ALLOWED_LOGON_VERSION_SERVER=10

 

The next one is case_insensitive_auth:

view +/case_insensitive_auth /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

Since 12.2, case-sensitive passwords are compulsory, so I must reset my sec_case_sensitive_logon=FALSE parameter.
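A minimal sketch of that reset (run as SYSDBA; removing the parameter means the default, TRUE, applies after the next restart):

SQL> alter system reset sec_case_sensitive_logon;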

The dictionary_stats and trgowner_no_admndbtrg issues were fixed by this wonderful preupgrade_fixups.sql and the remaining issues are optional. So we can move on to the upgrade to 19c itself (the subject of another blog post maybe).

As Tim Hall would say: hope this helps!

Let's clone a 12.1.0.2 database.

 

Copy the init file of the source database to the target machine.  Then edit that new init file so as to adapt the names and paths to the target machine.  Create any missing paths.

vi initRMP2FD.ora
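A minimal sketch of those steps (the source host and the source init file path are placeholders; the target directories come from the paths used in the DUPLICATE command further down):

scp oracle@sourcehost:/app/oracle/product/12.1.0/dbs/initRMP2.ora ./initRMP2FD.ora
mkdir -p /data/xs04/ora1/RMP2FD/ctl /data/xs04/ora10/RMP2FD/arch /data/xs04/ora11/RMP2FD/arch /app/xs04/oracle/adm/RMP2FD/audit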

 

On the target machine, add a static entry for your future clone database

vi listener.ora
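A sketch of such a static entry (the ORACLE_HOME is the target home that appears later in this post; adjust names and paths to your environment):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = RMP2FD)
      (ORACLE_HOME = /app/lxsora04/oracle/product/12.1.0)
      (SID_NAME = RMP2FD)
    )
  )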

then reload the listener

lsnrctl reload

In the tnsnames.ora of the target machine, add an alias for the target database

vi tnsnames.ora

Test that the new alias works

tnsping RMP2FD

 

SQL> startup nomount pfile=initRMP2FD.ora

ORACLE instance started.

 

05-07 15:50 RMP2FD SYS AS SYSDBA> alter system register;

System altered.

 

On the target machine, create a password file for your new database

orapwd file=orapwRMP2FD entries=5 sysbackup=y

 

On the target machine, add a new entry in oratab for your target database

vi /etc/oratab
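For example, the oratab line for this clone could look like the following (the ORACLE_HOME is the target home used later in this post; the trailing N means dbstart will not start the database automatically):

RMP2FD:/app/lxsora04/oracle/product/12.1.0:N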

 

Add some parallelism on the RMAN TARGET (that is, the source database):

rman TARGET sys@RMP2 AUXILIARY sys@RMP2FD

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Jul 5 16:40:07 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

target database Password:

connected to target database: RMP2 (DBID=2787549961)

auxiliary database Password:

connected to auxiliary database: RMP2FD (not mounted)

 

RMAN> configure device type disk parallelism 4 backup type to backupset ;

using target database control file instead of recovery catalog

old RMAN configuration parameters:

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters are successfully stored

 

 

On the source machine, add an entry for your target database in tnsnames.ora:

 

RMP2FD =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = targethost)(PORT = 1568))

    )

    (CONNECT_DATA =

      (SID = RMP2FD)

    )

  )

 

Then, on the target machine, make sure your target instance is started with a pfile and the database is not mounted:

SQL> startup nomount pfile='initRMP2FD.ora';

 

Then, on the target machine, launch RMAN connecting to both the source (TARGET) database and the target (AUXILIARY) database:

rman TARGET sys@RMP2 AUXILIARY sys@RMP2FD

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Jul 10 12:43:08 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

target database Password:

connected to target database: RMP2 (DBID=2787549961)

auxiliary database Password:

connected to auxiliary database: RMP2FD (not mounted)

 

Then, run your RMAN "DUPLICATE DATABASE FROM ACTIVE USING COMPRESSED BACKUPSET" command:

DUPLICATE DATABASE TO RMP2FD FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET SPFILE

    parameter_value_convert ('RMP2','RMP2FD')

    set db_file_name_convert='/data/xp32/ora1/RMP2/','/data/xs04/ora1/RMP2FD/','/data/lxpa0032/ora2/RMP2/','/data/xs04/ora2/RMP2FD/','/data/lxpa0032/ora3/RMP2/','/data/xs04/ora3/RMP2FD/'

    set log_file_name_convert='/data/lxpa0032/orared01/RMP2/','/data/xs04/orared01/RMP2FD','/data/lxpa0032/orared02/RMP2/','/data/xs04/orared02/RMP2FD'

    set audit_file_dest='/app/xs04/oracle/adm/RMP2FD/audit'

    set core_dump_dest='/app/xs04/oracle/diag/rdbms/unknown/RMP2FD/cdump'

    set control_files='/data/xs04/ora1/RMP2FD/ctl/control01.ctl','/data/xs04/ora1/RMP2FD/ctl/control02.ctl','/data/xs04/ora1/RMP2FD/ctl/control03.ctl'

    set db_name='RMP2FD'

    set diagnostic_dest='/app/xs04/oracle/'

    set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xs04)(PORT=36080))'

    set log_archive_dest_1='location=/data/xs04/ora10/RMP2FD/arch'

    set log_archive_dest_2='location=/data/xs04/ora11/RMP2FD/arch'

    set utl_file_dir='/app/xs04/oracle/adm/RMP2FD/log'

      NOFILENAMECHECK;

Make sure your various "set" commands convert all the paths that must be converted (have a look at all the paths mentioned in your source database init file).
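A quick way to list the file paths to check on the source database (run as SYSDBA on the source; a sketch):

SQL> select name from v$datafile
  union all select member from v$logfile
  union all select name from v$tempfile
  union all select name from v$controlfile;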

 

Then, watch RMAN do its magic:

Starting Duplicate Db at 10-JUL-19

using target database control file instead of recovery catalog

allocated channel: ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: SID=12 device type=DISK

current log archived

contents of Memory Script:

{

   restore clone from service  'RMP2' using compressed backupset

   spfile to  '/app/lxsora04/oracle/product/12.1.0/dbs/spfileRMP2FD.ora';

   sql clone "alter system set spfile= ''/app/lxsora04/oracle/product/12.1.0/dbs/spfileRMP2FD.ora''";

}

executing Memory Script

Starting restore at 10-JUL-19

using channel ORA_AUX_DISK_1

 

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using compressed network backup set from service RMP2

channel ORA_AUX_DISK_1: restoring SPFILE

...

 

I got one error only:

Executing: alter database enable block change tracking using file '/data/xp32/dump1/oracle/RMP2/save/RMAN/trackRMP2'

ORACLE error from auxiliary database: ORA-19751: could not create the change tracking file

ORA-19750: change tracking file: '/data/xp32/dump1/oracle/RMP2/save/RMAN/trackRMP2'

ORA-27040: file create error, unable to create file

Linux-x86_64 Error: 2: No such file or directory

because I had not entered a "set db_file_name_convert" for the path of the block change tracking file.  Fortunately, RMAN ignored that error and continued:

Ignoring error, reattempt command after duplicate finishes

contents of Memory Script:

{

   Alter clone database open resetlogs;

}

executing Memory Script

database opened

Finished Duplicate Db at 10-JUL-19
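Since RMAN deferred the block change tracking command, you can re-run it yourself on the clone once the duplicate is finished, pointing it at a path that exists on the target machine; a sketch (the path below is hypothetical):

SQL> alter database enable block change tracking using file '/data/xs04/ora1/RMP2FD/trackRMP2FD';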

 

My target database is now a clone of my source database.

I was asked to give my opinion on someone else's analysis of a performance run.   Here's the story.

A benchmark environment was set up at one-fourth of the production environment, to run a stress test.  The stress test ran from 2:57PM to 4:35PM on May 16.  Just for the heck of it, here are some numbers about that run:

The performance team report of that run starts with a bunch of figures about the machine resources, the transaction rate, the response times per web service etc.  Then appears the "Recommendations for the database" part of the report.  Here is how it starts:

On the screenshot above, you will have recognized the "Top Activity" part of OEM.  The grey vertical bar starts at 3:12PM so focuses on the 1st half of our stress test.  Yes yes, the small mostly green mountain on the right-hand side.  For the curious among you, the big mountain from about 4AM to 6AM was the loading of the test data.

1st comment:

During the whole duration of the stress test (with 50 users), the AAS (Average Active Sessions) never goes above 0.6 and the database activity is mostly CPU (the green color).  These 2 facts tell me that whatever the users were doing did not require many resources, and those users must not have suffered any waiting (at least not from the database).  I asked and was told that no, there was no complaint about performance during the stress test.

 

The first sentence of the analysis, which you can see at the top of the screenshot, says in French "3 SQL statements use lots of resources (presence of Full Table Scans)".  The said 3 statements must be the 3 we can see at the bottom left-hand corner of the screenshot.  The analysis goes on to make recommendations on each of those 3.

 

2nd comment:

Given that the whole stress test did not use many resources, how could 3 SQL statements use "lots of resources"?  Unless what is meant is "most of the resources used by the stress test".  According to my first comment, the stress test gave satisfactory results in terms of performance.  So why even bother looking at what used most of the little CPU that was used?

 

3rd comment:

The analysis says those 3 statements used "a lot of resources" but the screenshot shows they used CPU only (they're all green).  OK, so they did use some CPU, but that's what we like our databases to do, right?  (That's what we pay per-CPU Oracle licenses for.)

 

4th comment:

The analysis goes on to say that the statements used lots of resources because there were full table scans.  OK, we've already established that those statements did not use lots of resources.  Besides, the impact of a full table scan depends on the size of the table being scanned.  If the tables that were fully scanned only weigh a few MBs, the impact of the FTS is negligible.  Maybe that analysis meant that there were unnecessary table scans, because an FTS can be the best option, especially in the case of small tables.

 

Then there is a recommendation for the first of those 3 SQL statements:

The recommendation ("préconisation" in French) is to create a performance index on the 3 columns of table COTISATION that are mentioned in the FROM and WHERE clauses.

 

5th comment:

Apart from the fact that no evidence of the full table scan is shown, no justification for that 3-column index is given.  Again, the full table scan could be justified.  Maybe an index on CD_ORIGINE, ID_VEHICULE (possibly with the 1st column compressed) would suffice.  Maybe those 2 tables could be stored in a cluster, since they share that ID_GARANTIE column.
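For instance, a sketch of that alternative index (the index name is made up, and COTISATION is assumed to be the table carrying those two columns):

create index cotisation_orig_veh_ix on COTISATION (CD_ORIGINE, ID_VEHICULE) compress 1;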

 

There ensues a recommendation for the 2nd SELECT:

This time, the analysis says only "investigate the creation of a performance index to avoid the full table scan on table PARAM_FRANCHISE_VALEUR".

 

6th comment:

Again, no justification.  With no comparison between an FTS and an index path, one cannot claim that an index is required.

 

Lastly, a recommendation about the 3rd SQL statement:

 

This time we're given a piece of the explain plan.  The analysis suggests the creation of 2 indexes, one on the PARAM_LIBELLE table and another one on the PARAM_REP_AUTO table, in the hope of doing away with 2 FTSs.

 

7th comment:

Apart from the fact that again there is no justification for the bad FTS, it seems that there is only one FTS.  Given that the INDEX FAST FULL SCAN is on an index named X1_EDA_PARAM_REP_AUTO_IX5, I have a feeling that index belongs to table PARAM_REP_AUTO, so that table is accessed by rowid, not fully scanned.

 

8th comment:

And lastly, the FTS that did take place has a cost of 15, which is negligible.

 

I do not know the person who wrote that analysis.  I'm sharing this with you all in the hope of sharing useful information and of learning something new from your feedback!

My conclusions from this exercise are:

  1. don't spend time trying to improve what is working well already (unless there is a need for scalability)
  2. I love green!  If the SQL code produces green in OEM Top Activity, then I'm happy
  3. Oracle performance is complex and many things that are usually considered bad can sometimes be good, so it's best to test and substantiate one's claims with empirical evidence

 

Please add any comment to that analysis you feel would be useful, and please do comment on my comments above!

Recently I discovered the "relink" binary that sits in the bin folder of the grid infrastructure home (RAC) and the Oracle home (database).  I learned that the Oracle binaries in a GI for Linux image are not fully linked.  When you install that image, the Oracle Universal Installer links them against the appropriate OS libraries to produce the final executables.  So when those OS libraries are updated or modified, it can lead to issues with the Oracle binaries.  Hence the Oracle relink tool, which relinks the Oracle binaries against the (possibly new) OS libraries.

See this page in the Oracle documentation for reference:

https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/relinking-oracle-grid-infrastructure-for-a-cluster-b…

 

This article on Oracle-help explains how to use relink (for different recent versions of Oracle):

http://oracle-help.com/oracle-rac/relink-oracle-12c-grid-infrastructure-and-rdbms-binaries/

Here is what I did to use relink on a 12.1.0.2 GRID_HOME:

 

root@rac01: /tmp # $ORACLE_HOME/crs/install/rootcrs.sh -unlock

Using configuration parameter file: /app/grid/12.1.0/crs/install/crsconfig_params

2019/04/19 10:54:41 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2019/04/19 10:54:49 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2019/04/19 10:54:50 CLSRSC-347: Successfully unlock /app/grid/12.1.0

rac01@gridis::~ #  export PATH=$GRID_HOME/bin:$PATH

rac01@gridis::~ # $GRID_HOME/bin/relink all

writing relink log to: /app/grid/12.1.0/install/relink.log

Then repeat those steps on the second node:

root@rac02: /tmp # cd $ORACLE_HOME/rdbms/install

root@rac02: /app/grid/12.1.0/rdbms/install # ./rootadd_rdbms.sh

root@rac02: /app/grid/12.1.0/rdbms/install # cd $ORACLE_HOME/crs/install

root@rac02: /app/grid/12.1.0/crs/install # ./rootcrs.sh -patch => do this on both nodes.  It will start the cluster.

 

Note the 4 steps:

  1. first all the cluster resources must be stopped on the 1st node where you're going to run relink
  2. then, as root, unlock the Oracle binaries, with rootcrs.sh -unlock
  3. then, as the GI owner, run relink all
  4. then, as root, lock the binaries again, with rootadd_rdbms.sh and then rootcrs.sh -patch (this is for 12cR1; it's different in 12cR2, see the Oracle-help article above).  This will start your cluster on the current node.

Then repeat those steps on all the other nodes of your RAC.
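To recap, a condensed sketch of the per-node sequence, using the commands shown above (12cR1; this assumes $GRID_HOME is set and that the cluster resources running only on that node have been relocated or stopped):

$GRID_HOME/crs/install/rootcrs.sh -unlock        => as root
$GRID_HOME/bin/relink all                        => as the GI owner
$GRID_HOME/rdbms/install/rootadd_rdbms.sh        => as root
$GRID_HOME/crs/install/rootcrs.sh -patch         => as root; restarts the stack on that node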

Hello fellow DBAs,

I consider my tweets to be just as useful as a full blog post (if not more so), so I'm going to publish compilations of useful tweets: when you read a bunch of them back to back, you realize there's valuable content in there.  Here are the tweets I consider useful from January 2018 (I was doing a lot of Data Guard back then).  Enjoy!

 

 

 

Lately I have applied PSUs to 12cR1 QA and PROD Grid Infrastructure environments.  I ran into a number of problems and kept a journal of them.  I then created a Korn shell script that checks for the causes of those various problems.  I have also developed the following procedure for applying a PSU (here to a 2-node 12cR1 RAC environment).

 

Let's apply the July 2018 PSU to the node1/node2 12cR1 RAC cluster.

 

    Save GRID_HOME on node1:

First, clean up GRID_HOME (so as to save space and have a smaller backup):

root@node1: /u01/GI/12.1.0/.patch_storage # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   57G   28G  68% /app

root@node1: /u01/GI/12.1.0/.patch_storage # rm -rf 25363740_Mar_7_2017_23_34_43 25078431_Jan_25_2017_05_30_05 25363750_Feb_2_2017_23_57_22

root@node1: /u01/GI/12.1.0/.patch_storage # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   53G   32G  64% /app

root@node1: /u01/GI/12.1.0/.patch_storage # nice --8 tar cfz - /u01/GI/12.1.0  > /u01/GI/GRIDHOMEavril2018.tgz    => started at 10H01

root@node1: /home/giowner # ls -rtlh /u01/GI/*HOMEavril2018*gz

-rw-r--r-- 1 root root 6.2G Jan 29 10:22 /u01/GI/GRIDHOMEavril2018.tgz

     

        Carry out pre-patch checks (with my in-house shell script):

root@node1: /tmp # ksh /home/giowner/vérifspréoupostPSU.ksh

Description de la log (en 1 seul mot) ?

avantPSUJUILL2018

 

Patch level status of Cluster nodes :  818769343                       node2,node1

OPatch Version: 12.2.0.1.8

 

        Save ORACLE_HOME on node1:

root@node1: /u01/GI/12.1.0/.patch_storage # nice --7 tar cfz - /u01/OH/oracle/product/12.1.0 > /u01/GI/ORACLEHOMEavril2018.tgz &

[1] 33164

 

           Save GRID_HOME on node2:

First, do some housekeeping to save space.         

root@node2: /u01/GI # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   55G   30G  65% /app

root@node2: /u01/GI # du -h --max-depth=1 /u01/GI/12.1.0/|sort -h        => a handy command to sort by size

8.0K    /u01/GI/12.1.0/eons

8.0K    /u01/GI/12.1.0/gnsd

8.0K    /u01/GI/12.1.0/QOpatch

8.0K    /u01/GI/12.1.0/utl

12K     /u01/GI/12.1.0/ctss

12K     /u01/GI/12.1.0/dc_ocm

12K     /u01/GI/12.1.0/gipc

12K     /u01/GI/12.1.0/hs

12K     /u01/GI/12.1.0/mdns

12K     /u01/GI/12.1.0/ohasd

12K     /u01/GI/12.1.0/ologgerd

12K     /u01/GI/12.1.0/osysmond

12K     /u01/GI/12.1.0/scheduler

16K     /u01/GI/12.1.0/diagnostics

16K     /u01/GI/12.1.0/slax

20K     /u01/GI/12.1.0/olap

24K     /u01/GI/12.1.0/addnode

32K     /u01/GI/12.1.0/.opatchauto_storage

36K     /u01/GI/12.1.0/auth

40K     /u01/GI/12.1.0/wlm

68K     /u01/GI/12.1.0/relnotes

116K    /u01/GI/12.1.0/clone

228K    /u01/GI/12.1.0/css

256K    /u01/GI/12.1.0/gpnp

312K    /u01/GI/12.1.0/racg

384K    /u01/GI/12.1.0/precomp

428K    /u01/GI/12.1.0/sqlplus

528K    /u01/GI/12.1.0/dbs

536K    /u01/GI/12.1.0/xag

540K    /u01/GI/12.1.0/xdk

556K    /u01/GI/12.1.0/wwg

732K    /u01/GI/12.1.0/ucp

1.5M    /u01/GI/12.1.0/instantclient

1.6M    /u01/GI/12.1.0/plsql

1.7M    /u01/GI/12.1.0/owm

2.0M    /u01/GI/12.1.0/deinstall

2.6M    /u01/GI/12.1.0/opmn

2.7M    /u01/GI/12.1.0/has

6.4M    /u01/GI/12.1.0/evm

6.4M    /u01/GI/12.1.0/network

11M     /u01/GI/12.1.0/install

12M     /u01/GI/12.1.0/ldap

14M     /u01/GI/12.1.0/demo

15M     /u01/GI/12.1.0/cdata

20M     /u01/GI/12.1.0/sqlpatch

22M     /u01/GI/12.1.0/srvm

29M     /u01/GI/12.1.0/log

32M     /u01/GI/12.1.0/ord

32M     /u01/GI/12.1.0/oui

38M     /u01/GI/12.1.0/jdbc

48M     /u01/GI/12.1.0/dmu

48M     /u01/GI/12.1.0/nls

53M     /u01/GI/12.1.0/oracore

74M     /u01/GI/12.1.0/jlib

74M     /u01/GI/12.1.0/perl

76M     /u01/GI/12.1.0/suptools

77M     /u01/GI/12.1.0/cv

78M     /u01/GI/12.1.0/rest

99M     /u01/GI/12.1.0/crs

104M    /u01/GI/12.1.0/md

132M    /u01/GI/12.1.0/cfgtoollogs

147M    /u01/GI/12.1.0/oc4j

160M    /u01/GI/12.1.0/jdk

201M    /u01/GI/12.1.0/OPatch

208M    /u01/GI/12.1.0/crf

262M    /u01/GI/12.1.0/rdbms

263M    /u01/GI/12.1.0/assistants

336M    /u01/GI/12.1.0/javavm

910M    /u01/GI/12.1.0/tfa

985M    /u01/GI/12.1.0/lib

1.5G    /u01/GI/12.1.0/inventory

2.1G    /u01/GI/12.1.0/bin

3.1G    /u01/GI/12.1.0/usm

12G     /u01/GI/12.1.0/.patch_storage

23G     /u01/GI/12.1.0/

root@node2: /u01/GI # cd /u01/GI/12.1.0/.patch_storage

root@node2: /u01/GI/12.1.0/.patch_storage # ls -lrth

total 128K

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21436941_Aug_13_2015_04_00_40

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 23854735_Sep_29_2016_23_50_00

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 24006101_Oct_1_2016_12_33_50

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 23054246_Jul_5_2016_07_07_59

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 22291127_Apr_6_2016_03_46_21

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21948354_Dec_20_2015_23_39_33

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21359755_Oct_21_2015_02_52_58

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 20831110_Jul_11_2015_00_45_40

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 20299023_Mar_16_2015_22_21_54

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 19769480_Dec_15_2014_06_54_52

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 24007012_Aug_30_2016_00_17_17

drwxr-xr-x  4 giowner oracle 4.0K Sep 10  2017 25363740_Mar_7_2017_23_34_43

drwxr-xr-x  4 giowner oracle 4.0K Sep 10  2017 25363750_Feb_2_2017_23_57_22

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:12 26983807_Nov_8_2017_07_59_12

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:14 27338013_Feb_14_2018_10_26_39

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:15 27338020_Mar_27_2018_11_48_03

drwxr-xr-x 28 giowner oracle 4.0K Oct 15 14:15 NApply

-rw-r--r--  1 giowner oracle  12K Oct 15 14:15 record_inventory.txt

-rw-r--r--  1 giowner oracle  15K Oct 15 14:15 interim_inventory.txt

-rw-r--r--  1 giowner oracle   93 Oct 15 14:16 LatestOPatchSession.properties

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26925311_Dec_6_2017_01_18_05

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 24732082_Dec_21_2016_07_15_01

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26609783_Aug_10_2017_05_36_42

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 25755742_Jun_29_2017_09_56_57

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 25171037_Mar_7_2017_12_37_23

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26713565_Sep_29_2017_06_57_50

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 27338041_Mar_16_2018_02_05_00

root@node2: /u01/GI/12.1.0/.patch_storage # rm -rf 21436941_Aug_13_2015_04_00_40 23854735_Sep_29_2016_23_50_00 24006101_Oct_1_2016_12_33_50 23054246_Jul_5_2016_07_07_59 22291127_Apr_6_2016_03_46_21 21948354_Dec_20_2015_23_39_33 21359755_Oct_21_2015_02_52_58 20831110_Jul_11_2015_00_45_40 20299023_Mar_16_2015_22_21_54 19769480_Dec_15_2014_06_54_52 24007012_Aug_30_2016_00_17_17 25363740_Mar_7_2017_23_34_43 25363750_Feb_2_2017_23_57_22

nice --8 tar cfz - /u01/GI/12.1.0 > /u01/GI/GRIDHOMEavril2018.tgz    => started at 10H21

root@node2: /u01/GI/12.1.0/.patch_storage # ls -lh /u01/GI/GRIDHOMEavril2018.tgz

-rw-r--r-- 1 root root 5.3G Jan 29 10:38 /u01/GI/GRIDHOMEavril2018.tgz

         

            Upgrade OPatch, on all nodes:

        Upgrade Opatch, for all GRID_HOMEs, as root:

scp giowner@otherserver:/u01/OH/oracle/product/12.1.0/DernierOPatch--p6880880_122010_Linux-x86-64.zip /u01/GI/12.1.0/

cd /u01/GI/12.1.0/ ; chown giowner:oracle DernierOPatch--p6880880_122010_Linux-x86-64.zip ; cd $GRID_HOME; ./OPatch/opatch version ; du -sh ./OPatch

cp -R  ./OPatch/ ./OPatch12.2.0.1.8 && du -sh ./OPatch12.2.0.1.8    => save the old OPatch, just in case

rm -Rf ./OPatch/ && unzip DernierOPatch--p6880880_122010_Linux-x86-64.zip &&  ./OPatch/opatch version ; chown -R giowner:oracle ./OPatch  ;  ls -dlah OPatch ; mv DernierOPatch--p6880880_122010_Linux-x86-64.zip /u01/OH/oracle/product/12.1.0/

        Upgrade Opatch, for all ORACLE_HOMEs, as oracle:

cd $ORACLE_HOME;./OPatch/opatch version ; du -sh ./OPatch

cp -R  ./OPatch/ ./OPatch12.2.0.1.8  && du -sh ./OPatch12.2.0.1.8

rm -Rf ./OPatch/ && unzip DernierOPatch--p6880880_122010_Linux-x86-64.zip &&  ./OPatch/opatch version ; ls -dlah OPatch

     

           Save ORACLE_HOME on node2

nice --8 tar cfz - /u01/OH/oracle/product/12.1.0 > /u01/GI/ORACLEHOMEavril2018.tgz    => started at 10H49

     

           Bring the PSU onto node1

        as root:

root@node1: /u01/GI/12.1.0 # mkdir -p /app/distrib/patch/ ; chmod g+w /app/distrib/patch/ ; chown oracle:oracle /app/distrib/patch/ ;mount -o vers=3,nolock 10.20.30.40:/patches/LinuxGI /mnt

        as giowner

cd /mnt/oracle/Linux64/12.1.0.2/ ; unzip PSUJUL2018pourRAC--p27967747_121020_Linux-x86-64.zip -d /app/distrib/patch/

     

            Preparations:

Make sure /tmp has at least 1GB of free space.
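For instance, a quick check:

df -h /tmp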

    Log some data about the cluster before applying the patch:

as giowner : /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvBEFORE; $GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvBEFORE

     

    Check patch compatibility, as root on node1:

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -analyze -logLevel FINER        => started at 10H56    OPatchAuto successful

     

    Stop any ACFS filesystems, as root:

node1@giowner:+ASM2:~ # /sbin/acfsutil registry|grep "Mount Point"|awk -F ":" '{print "/bin/umount"$2}'

/bin/umount /app/oacfsmp

/bin/umount /data/oacfsmp

root@node1: /u01/GI/12.1.0 #  mount |grep acfs  ;   

 

    Move any cluster resource that runs on this node only (as the whole cluster will be brought down on this node):

root@node1: /u01/GI/12.1.0 # crsctl eval relocate resource dollarU -s node1 -n node2 -f        => the EVAL command just says what will happen if you run this relocate command

root@node1: /u01/GI/12.1.0 # crsctl relocate resource dollarU -s node1 -n node2 -f        => the actual relocate command, which moves the dollarU resource from node1 to node2

 

       Apply the patch on node1, as root

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -logLevel FINE    => started at 11H04    Session log file is /u01/GI/12.1.0/cfgtoollogs/opatchauto/opatchauto2019-01-29_11-04-40AM.log    The id for this session is 6S82

Bringing down CRS service on home /u01/GI/12.1.0

...

It should end with "OPatchauto session completed at Tue Jan 29 11:34:33 2019 Time taken to complete the session 12 minutes, 5 seconds"

 

        Post-patch checks on node1, as giowner

node1@giowner:+ASM1:~ # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "ACFS PATCH SET UPDATE 12.1.0.2.180717 (27762277)"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

Patch description:  "WLM Patch Set Update: 12.1.0.2.180116 (26983807)"

     

        Post-patch checks on node1, as oracle

$ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

node1@oracle:MYRACDB1:/u01/OH/oracle/product/12.1.0/bin # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

     

Log some data about the cluster after applying the patch, as  giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvAFTER;$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvAFTER;crsctl query crs activeversion -f  >> ~/$(date +%Y%m%d_)lsinvAFTER     

=> Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [818769343].

 

If everything is OK so far, and there has been no downtime, let's continue with the next node:

     

    Move any cluster resource that runs on that second node only (as the whole cluster will be brought down on this node): 

root@node2: /u01/GI/12.1.0 # crsctl eval relocate resource dollarU -s node2 -n node1 -f

root@node2: /u01/GI/12.1.0 # crsctl relocate resource dollarU -s node2 -n node1 -f

 

    Carry out pre-patch checks (with my in-house shell script):

root@node2: /home/giowner # ksh vérifspréoupostPSU.ksh

Description de la log (en 1 seul mot) ?

avantPSUJUILL2018

 

Log any reported problem and fix it.

     

            Bring the PSU onto node2

        as root:

root@node1: /u01/GI/12.1.0 # mkdir -p /app/distrib/patch/ ; chmod g+w /app/distrib/patch/ ; chown oracle:oracle /app/distrib/patch/ ;mount -o vers=3,nolock 10.20.30.40:/patches/LinuxGI /mnt

       as giowner:

cd /mnt/oracle/Linux64/12.1.0.2/ ; unzip PSUJUL2018pourRAC--p27967747_121020_Linux-x86-64.zip -d /app/distrib/patch/

     

            Preparations on node 2:

Make sure /tmp has at least 1GB of free space.

    Log some data about the cluster before applying the patch:

as giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvBEFORE; $GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvBEFORE

     

    Check patch compatibility (dry run), as root on node1:

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -analyze -logLevel FINER        => started at 10H56    OPatchAuto successful

     

    Stop any ACFS filesystems, as root:

node1@giowner:+ASM2:~ # /sbin/acfsutil registry|grep "Mount Point"|awk -F ":" '{print "/bin/umount"$2}'

/bin/umount /app/oacfsmp

/bin/umount /data/oacfsmp

root@node1: /u01/GI/12.1.0 #  mount |grep acfs  ;   

 

 

                Apply the patch to the 2nd node, as root

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -logLevel FINE        => started at 15H37        Session log file is /u01/GI/12.1.0/cfgtoollogs/opatchauto/opatchauto2019-01-29_03-38-31PM.log    The id for this session is YRQF    OPatchauto session completed at Tue Jan 29 15:51:44 2019    Time taken to complete the session 13 minutes, 38 seconds

     

        Post-patch checks on node2, as giowner

node2@giowner:+ASM2:~ # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "ACFS PATCH SET UPDATE 12.1.0.2.180717 (27762277)"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

Patch description:  "WLM Patch Set Update: 12.1.0.2.180116 (26983807)"

 

       Post-patch checks on node2, as oracle

node2@oracle:MYRACDB2:/u01/OH/oracle/product/12.1.0/bin # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

29-01 16:08 MYRACDB2 SYS AS SYSDBA> select PATCH_ID, DESCRIPTION, ACTION, STATUS from DBA_REGISTRY_SQLPATCH;

  PATCH_ID DESCRIPTION                                               ACTION          STATUS

---------- --------------------------------------------------------- --------------- ---------------

  25171037 DATABASE PATCH SET UPDATE 12.1.0.2.170418                 APPLY           SUCCESS

  27338041 DATABASE PATCH SET UPDATE 12.1.0.2.180417                 APPLY           SUCCESS

  27547329 DATABASE PATCH SET UPDATE 12.1.0.2.180717                 APPLY           SUCCESS 

 

Log some data about the cluster after applying the patch, as  giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvAFTER;$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvAFTER;crsctl query crs activeversion -f  >> ~/$(date +%Y%m%d_)lsinvAFTER

 

    When all is over, dismount the mount point of the patch:

root@node2: /home/giowner # umount /mnt

root@node1: /tmp # umount /mnt

 

     So I patched both nodes of my cluster with no downtime.  I have a backup of both my ORACLE_HOMEs and GRID_HOMEs and a backup of the previous version of OPatch, and I recorded the situation before and after the patch (in the $(date +%Y%m%d_)lsinvBEFORE and $(date +%Y%m%d_)lsinvAFTER text files).  Please tell me what you think in the comments below.

Dear DBA Frank

New Quality Blog

Posted by Dear DBA Frank Nov 30, 2018

A colleague of mine recently started his own blog.  He is an advanced DBA with good all-round knowledge of the Oracle database.  I encouraged him to share some really useful stuff he had shown me, so here goes:

MY DBA WORLD

I am sure many will enjoy his posts (about half a dozen so far) as they are clear, thorough and insightful.  Don't hesitate to leave comments on his blog posts!

Oftentimes, I have to run an EXPDP job on a server with no in-house scripts to run EXPDP.  So instead of figuring out how to write my long EXPDP command line, such as this one:

expdp system@mydb directory=DATAPUMP dumpfile=161122MYDBPROD-%U.dmp logfile=161122MYDBPRODexport.log JOB_NAME=MYDBPROD schemas=myprodschema parallel=2 EXCLUDE=STATISTICS METRICS=Y

I just copy/paste a ksh script of mine on the database server, then run it with the appropriate 5 parameters:

expdp5.ksh 12 MYBIGSCHEMA "TABLE:\"LIKE '%$%'\"" MYDB_MYBIGSCHEMA /data/share/exports/exadata/DATAPUMP
expdp5.ksh 32 SHEBANG SHEBANG MYDB_THEWHOLESHEBANG /data/share/exports/exadata/DATAPUMP

 

Here are the contents of the Korn shell script:

#!/bin/ksh

##set -x

# July 10 2018, for a one-off expdp job. Frank Dernoncourt

# be connected as the oracle Unix user

# October 30, 2018 :  added flashback_time=to_timestamp\(localtimestamp\)

 

#### 1st  parameter : degree of  parallelism

#### 2nd parameter : schema or SHEBANG for a FULL export          #### to export several schemas, enter a comma-separated list

#### 3rd parameter : EXCLUSIONS or SHEBANG          #### SHEBANG if you don't want to exclude anything.  For exclusions, just enter what you would enter after EXCLUDE=.  For instance, "STATISTICS" for "EXCLUDE=STATISTICS"

#### 4th parameter : dumpfile without .dmp

#### 5th parameter : path for the dump and the log

 

LOGFILE=expdp--${2}.log

THEDATE=$(date +%Y%m%d_%H%M)

SECONDS=0

echo ""        | tee -a ${LOGFILE}

echo "================== This is the log of $(basename ${0}), which ran at ${THEDATE} =================="        | tee -a ${LOGFILE}

 

if [ -z "${ORACLE_SID}" ]

then

echo "ORACLE_SID not defined"

exit 1

fi

 

if [ $# -ne 5 ]

then

echo "Wrong number of parameters."

echo "1 : DOP"

echo "2 : schema to export or SHEBANG for FULL"

echo "3 : exclusions or SHEBANG  Exemple: "TABLE:\"LIKE '%BIN$%'\"" "          #### In this example, tables from the RECYCLING BIN will be excluded

echo "4 : dumpfile name without .dmp"

echo "5 : path for the log and dump"

exit 1

fi

 

if [[ ${1} -gt 1 ]]

then

echo "You chose a degree of parallelism of ${1}, which will create ${1} dumpfiles."

DOP=${1}

else

DOP="1"

fi

 

export AE=$(echo ${2} | tr '[:lower:]' '[:upper:]')

if [[ ${AE} = "SHEBANG" ]]

then

echo "You chose to run a FULL export."

AEXPORTER="FULL=Y"

else

echo "You chose to export schema ${AE}."

AEXPORTER="SCHEMAS=${AE}"

fi

 

export EXCLU=$(echo ${3} | tr '[:lower:]' '[:upper:]')

if [[ ${EXCLU} = "SHEBANG" ]]

then

echo "You chose not to exclude anything from the export."

EXCLUSIONS=""

else

echo "You chose those exclusions: ${EXCLU}."

EXCLUSIONS="EXCLUDE=${EXCLU}"

fi

 

export FDUMP=${4}

if [[ ${DOP} -gt 1 ]]

then

DUMPFILE="${FDUMP}%U"

else

DUMPFILE="${FDUMP}"

fi

 

export CHEMIN=${5}

if [ -d ${CHEMIN} ]

then

DIRECTORY="${CHEMIN}"

else

echo "The chosen path does not exist."

exit 1

fi

 

sqlplus -s /nolog <<EOF

connect / as sysdba

whenever sqlerror exit

-- For this one-off job, I create a DIRECTORY that I drop when it's over
create or replace directory DATA_PUMP_UN as '${DIRECTORY}';

EOF

 

echo "The command line to be run is: expdp system flashback_time=to_timestamp(localtimestamp) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1} "                | tee -a ${LOGFILE}

 

echo "OK to continue Y/N ? "

read CONFIRMATION

if [ "${CONFIRMATION}" = "N" ]

then exit

fi

 

expdp system flashback_time=to_timestamp\(localtimestamp\) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1}  2>>${LOGFILE}          #### Only errors are routed to the main log.  The full EXPDP log will still appear in LOGFILE in the DATA_PUMP_UN directory

 

RET=$?

echo "return code of the export: $RET" | tee -a ${LOGFILE}

 

if [ $RET -ne 0 ]

then

echo "==> UNEXPECTED RESULT" | tee -a ${LOGFILE}

else

echo "==> EXPECTED RESULT" | tee -a ${LOGFILE}

fi

 

##set +x

sqlplus -s /nolog <<EOF

connect / as sysdba

whenever sqlerror exit

-- Once the job is done, I drop the DIRECTORY.
drop directory DATA_PUMP_UN;

EOF

 

echo "   duration ${SECONDS} seconds" | tee -a ${LOGFILE}

date "+%n%d/%m/%Y %H:%M:%S%n" | tee -a ${LOGFILE}

echo "View the log that script in ${LOGFILE}"

 

exit $RET

 

The first version of my shell script is in French and here is what it looks like when you run it:

181031expdp5FRENCH.jpg

If anything is unclear, let me know in the comments below.  Enjoy my script!

For an SR, I'm being asked by Oracle support to provide cluster data from 4 hours before the problem to 4 hours after it.

For reference, check SRDC - Data Collection for Grid Infrastructure, Storage Management, and RAC (Doc ID 2041228.1)

Here are the command lines I used and their output.

[myracuser@racnode1 bin]$ pwd

/app/grid/12.1.0/tfa/bin

[myracuser@racnode1 bin]$

[myracuser@racnode1 bin]$ /app/grid/12.1.0/tfa/bin/tfactl diagcollect -from "Jun/1/2018 13:00:00" -to "Jun/1/2018 21:00:00"

Collecting data for all nodes

Scanning files from Jun/1/2018 13:00:00 to Jun/1/2018 21:00:00

 

Collection Id : 20180604104236racnode1

 

Repository Location in racnode1 : /app/myracuser/tfa/repository

 

Collection monitor will wait up to 30 seconds for collections to start

2018/06/04 10:42:40 CEST : Collection Name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

2018/06/04 10:42:40 CEST : Scanning of files for Collection in progress...

2018/06/04 10:42:40 CEST : Collecting extra files...

2018/06/04 10:43:10 CEST : Getting list of files satisfying time range [06/01/2018 13:00:00 CEST, 06/01/2018 21:00:00 CEST]

2018/06/04 10:43:10 CEST : Starting Thread to identify stored files to collect

2018/06/04 10:43:10 CEST : Getting List of Files to Collect

2018/06/04 10:43:10 CEST : Trimming file : racnode1/app/myracuser/crsdata/racnode1/cvu/cvutrc/cvuhelper.log.0 with original file size : 656kB

2018/06/04 10:43:11 CEST : racnode1: Zipped 100 Files so Far

2018/06/04 10:43:16 CEST : racnode1: Zipped 200 Files so Far

2018/06/04 10:43:19 CEST : racnode1: Zipped 300 Files so Far

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_oraagent_myracuser.trc with original file size : 7MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_orarootagent_root.trc with original file size : 882kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_scriptagent_myracuser.trc with original file size : 4.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/evmd.trc with original file size : 2.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/gipcd.trc with original file size : 7.6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ocssd_1.trc with original file size : 52MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/octssd.trc with original file size : 6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd.trc with original file size : 6.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdagent_root.trc with original file size : 854kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdmonitor_root.trc with original file size : 847kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_oraagent_myracuser.trc with original file size : 6.5MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_orarootagent_root_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/var/log/messages-20180603 with original file size : 4.2MB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener/trace/listener.log with original file size : 666kB

2018/06/04 10:43:21 CEST : racnode1: Zipped 400 Files so Far

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan1/trace/listener_scan1.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan2/trace/listener_scan2.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan3/trace/listener_scan3.log with original file size : 669kB

2018/06/04 10:43:31 CEST : Finished Getting List of Files to Collect

2018/06/04 10:43:40 CEST : Collecting ADR incident files...

2018/06/04 10:43:40 CEST : Waiting for collection of extra files

2018/06/04 10:45:10 CEST : Completed collection of extra files...

2018/06/04 10:45:10 CEST : Completed Zipping of all files

2018/06/04 10:45:10 CEST : Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finished Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Finished Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Total Number of Files checked : 3034

2018/06/04 10:45:10 CEST : Total Size of all Files Checked : 492MB

2018/06/04 10:45:10 CEST : Number of files containing required range : 489

2018/06/04 10:45:10 CEST : Total Size of Files containing required range : 135MB

2018/06/04 10:45:10 CEST : Number of files trimmed : 19

2018/06/04 10:45:10 CEST : Total Size of data prior to zip : 27MB

2018/06/04 10:45:10 CEST : Saved 109MB by trimming files

2018/06/04 10:45:10 CEST : Zip file size : 2.3MB

2018/06/04 10:45:10 CEST : Total time taken : 150s

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

Logs are being collected to: /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all

/app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

 

180604outputofTFACollector.jpg

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ ls -lrth

total 2.4M

-rwx------ 1 myracuser oracle 1.6K Jun  4 10:43 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip.txt

-rwx------ 1 myracuser oracle 2.4M Jun  4 10:45 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

-rwx------ 1 myracuser oracle 4.3K Jun  4 10:45 diagcollect_20180604104236_racnode1.log

 

 

The .txt file is 40 lines long and is the report of the collection:

TFA Version : 12.1.2.7.0

Build ID : 12127020160304140533

 

Collection ID : 20180604104236racnode1

Zip file name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file location : /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file creation date : Mon Jun 04 2018 10:42:40 CEST

Host name : racnode1

Duration of Diagnostics :

  Start date : Fri Jun 01 2018 13:00:00 CEST

  End date : Fri Jun 01 2018 21:00:00 CEST

Component(s) in zip file : INSTALL  CRS  OS  EMAGENT  DBWLM  OCM  ACFS  RDBMS  TNS  OMS  WLS  EMPLUGINS  EM  ASMPROXY  ASM  CFGTOOLS  CHMOS  SUNDIAG

User initiated

 

Directory Structure in zip file :

racnode1/

racnode1/etc/oracle

racnode1/app/oraInventory/ContentsXML

racnode1/app/myracuser/crsdata/racnode1/output

racnode1/var/log

racnode1/tnslsnr/racnode1/listener_scan2/trace

 

 

The log file contains what appeared on standard output above, which ends with:

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

 

Here is a glance at the contents of the zip file:

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ unzip -l racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Archive:  racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

  Length      Date    Time    Name

---------  ---------- -----   ----

    13658  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/lsinv/lsinventory2018-06-01_17-24-19PM.txt

    22345  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/opatch2018-06-01_17-24-19PM_1.log

   383494  06-01-2018 17:13   racnode1/app/grid/12.1.0/install/make.log

     1063  06-01-2018 15:58   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_15-58-40.log

     7402  06-01-2018 17:23   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_17-13-23.log

      329  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/comps.xml

      564  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/inventory.xml

      292  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/libs.xml

     8464  06-01-2018 17:08   racnode1/app/oraInventory/logs/UpdateNodeList2018-06-01_05-07-55-PM.log

     369  06-04-2018 10:42   racnode1/racnode1_QUERYVOTE

      127  06-04-2018 10:42   racnode1/racnode1_VERSIONS

---------                     -------

27114817                     550 files

 

I then uploaded their zip (only the zip file, as requested) to the SR.  The SR status changed from “Customer Working” to “Review Update”.
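Note that diagcollect accepts other scoping options besides -from/-to. The exact flags vary by TFA release, so check "tfactl diagcollect -h" on your installation; on a TFA of this vintage (12.1.2.7), something like the following should limit the collection to the last 8 hours on the local node only (a hedged example, not taken from the run above):

[myracuser@racnode1 bin]$ /app/grid/12.1.0/tfa/bin/tfactl diagcollect -since 8h -node local

Collecting only on the local node keeps the zip small when you already know which node had the problem.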

La 18c version "sur-site" n'est pas encore disponible (elle l'est dans le nuage), mais le client Oracle 18c est sorti.

Installons ce client sur notre poste Windows 7 Enterprise. Après avoir décompressé l'archive zip, il faut lancer le setup.exe :

ScreenShot002.jpg

Si on choisit personnalisé, après avoir indiqué un utilisateur Windows qui sera propriétaire de la nouvelle ORACLE_HOME, on verra cette liste de composants à choisir :

Mais nous allons sélectionner "Administrateur" et faire "Suivant" :

On choisit "Employer l'utilisateur Windows existant" et on saisit les informations de connexion de notre utilisateur Windows:

Indiquons la ORACLE_BASE :

Ensuite, ça tourne tout seul :

Jusqu'à arriver à ce résumé :

180619client18--1.jpg

 

We can save the response file, which can be used for a later silent installation (see the note at the end of this post), then we click "Install" and it runs on its own:

Then it runs the configuration step, again on its own:

And it should end like this:

 

If you click the "Help" button, it makes you smile:

 

We now have a new Oracle client in the path we specified:

And in the Windows Start menu, we have these new entries:

ScreenShot001.jpg

 

And we can now try the new SQL+:
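As a side note, the response file saved during the interactive run can drive a later silent installation. This is only a hedged sketch (the path to the response file is illustrative); check the client installation guide for the exact syntax on your version:

setup.exe -silent -responseFile C:\stage\client\client_install.rsp

The installer then runs without any GUI, which is handy for deploying the same client on many workstations.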

If you use Data Guard, you probably use its broker.  You may not have noticed, but whenever you make changes to your Broker configuration, those changes are stored in a file on the operating system.  The location of that file is specified in the DG_BROKER_CONFIG_FILE1 initialisation parameter.  DG_BROKER_CONFIG_FILE2 is a backup of DG_BROKER_CONFIG_FILE1, as we are going to see in this post.  Note that, as the Oracle documentation says, these initialisation parameters can be modified only when the broker is not running.
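If you want to check where those files are on your own databases, the parameters are visible from SQL*Plus; a minimal example (the paths returned are whatever your instance reports):

SQL> show parameter dg_broker_config_file

or

SQL> select name, value from v$parameter where name like 'dg_broker_config_file%';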

Let's look at what happens to those files when you create/delete/re-create/modify a broker configuration in Oracle 12cR1.

 

Here is the situation at 11:30AM on the 29th of May (database MYDB4 is on server machine2, and database MYDB3 is on server machine1):

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 10:03 /app/racnode1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:22 /app/racnode1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's do away with the current broker configuration:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

ORA-16620 occurs when the broker is not started on one or more of the databases.  In that case, the broker configuration files are not affected.  Let's start the broker on our STANDBY and try again.
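Starting the broker simply means setting dg_broker_start to TRUE on the standby, the same step we will run explicitly on MYDB4 later in this post (a hedged sketch; scope=both assumes an spfile):

SQL> alter system set dg_broker_start=TRUE scope=both;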

DGMGRL> remove configuration

Removed configuration

 

Now let's create a new broker configuration:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Configuration "MYDB3dgconf" created with primary database "MYDB3"

 

Since I am working on machine1, the broker configuration files on machine1 have been affected at 11:35AM:

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle  12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

 

Now let's enable that configuration:

DGMGRL>  enable configuration

Enabled.

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

As you can see, only one of the configuration files was modified (at 11:36AM).  This is because the other one is a backup of the previous state of the configuration (when it was not enabled).

 

At 11:38AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Error: ORA-16525: The Oracle Data Guard broker is not yet available.

Failed.

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

No files were modified.  When you generate an error in DGMGRL, the broker configuration files are not affected.

Here is the explanation in the Data Guard log on machine1:

ADD DATABASE

      05/29/2018 11:38:54

      DG_BROKER_START is not set to TRUE on member MYDB4. Set the parameter to TRUE and then try to add the member to the configuration.
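For reference, that Data Guard log is the broker log file, named drc<ORACLE_SID>.log and stored with the other trace files. A hedged way to follow it while reproducing the problem (adjust the ADR path for your environment):

machine1@oracle::~ # tail -f $ORACLE_BASE/diag/rdbms/mydb3/MYDB3/trace/drcMYDB3.log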

 

So let's correct that:

29-05 11:43 MYDB4 SYS AS SYSDBA> alter system set dg_broker_start=TRUE;

System altered.

 

Now at 11:45AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Database "MYDB4" added

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:45 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

On machine1, one configuration file was altered, and the other is a backup of the previous state.

But on machine2, both configuration files were modified (in fact, re-created), since this is the first time this new broker configuration generated a change on machine2 (remember, that broker configuration was created at 11:35AM on machine1 and so far had not done anything involving machine2).

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

 

But the STANDBY broker log reports an error:

05/29/2018 11:44:50

      Site MYDB3 returned ORA-16603.

      MYDB4 version check failed; status = ORA-16603: detected configuration unique ID mismatch

            configuration will be deleted on this database

 

Do you remember that the broker of my STANDBY was not started when I ran CREATE CONFIGURATION on the PRIMARY?  So let's re-create the broker configuration while both brokers are started.

 

At 11:55AM:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

Removed configuration

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

As you can see, when you remove a broker configuration, there is no backup of the removed configuration since both files are reset (their size is 8K, while when there was a configuration, their size was 12K).

At that point, I had to move to another subject, and my colleague finished that broker configuration.  But on the following day (the 30th of May), I had to re-create it.

Here is the situation on the morning of May 30:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

At 10:40AM, I removed the broker configuration from machine1:

DGMGRL>  remove configuration;

Removed configuration

 

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

It seems the broker first updates the other server (the files on that one were reset at 10:40, while the files on machine1 were reset one minute later).

 

Now let's create a new broker configuration, with MYDB3 as the PRIMARY:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16584: operation cannot be performed on a standby database

Failed.

 

So let's go to the other server (machine2):

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys

Password:

Connected as SYSDG.

 

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16642: DB_UNIQUE_NAME mismatch

Failed.

This error occurred at 10:44AM, and it did modify the configuration files:

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

But not on the STANDBY side (machine1):

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's create our configuration but with the right PRIMARY this time:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

As seen previously, when you create a configuration, both files are affected (but notice their difference in size):

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

But on the STANDBY side, nothing happened, as the STANDBY has not been added to the configuration yet:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's enable that configuration:

DGMGRL> enable configuration

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Again, one of the files contains the previous state of the configuration (from 2 minutes earlier, before the configuration was enabled).

 

Now, let's add a STANDBY to that configuration:

DGMGRL> add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

That error did not affect the configuration files.

 

Before we modify initialisation parameter log_archive_dest_3 (which is the cause of the ORA-16698), we must remove our configuration.  We will re-create it after that modification in our database.

DGMGRL> remove configuration;

Removed configuration

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Of course, all files are reset.

Now let's modify that parameter in both databases:

alter system set log_archive_dest_3='' scope =both ;
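To double-check that the destination is really empty before re-creating the configuration, you can query V$ARCHIVE_DEST (a hedged example; dest_id 3 matches the parameter we just cleared):

SQL> select dest_id, status, destination from v$archive_dest where dest_id = 3;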

 

And re-create our broker configuration:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

One file on machine2 contains our new configuration while the other one is a backup of the previous state (no configuration, hence 8K).  Meanwhile, on machine1, nothing happened, but you already know why.

 

DGMGRL> enable configuration;

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

One file on machine2 contains our new configuration (the one dated 11:50AM) while the other one is a backup of the previous state.

 

At 11:52AM:

DGMGRL> add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Database "MYDB3" added

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:52 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

The files on the STANDBY side have not yet been modified, but you already know why.

 

At 12:02PM:

DGMGRL> enable database 'MYDB3';

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:03 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

At last, the files on the STANDBY side are modified.

And in my PRIMARY, this ENABLE DATABASE has automatically filled in the log_archive_dest_3 parameter with "service="MYDB3_dgmgrl", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="MYDB3" net_timeout=30, valid_for=(online_logfile,all_roles)".

And in my STANDBY, this ENABLE DATABASE has automatically filled in the fal_server parameter with "MYDB4_dgmgrl", and the log_archive_config parameter has changed from "nodg_config" to "dg_config=(MYDB4)".
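A quick way to verify those automatic changes is to query the parameters on each side (a hedged sketch in plain SQL*Plus; the values will be whatever the broker set):

-- on the PRIMARY (MYDB4)
SQL> show parameter log_archive_dest_3

-- on the STANDBY (MYDB3)
SQL> show parameter fal_server
SQL> show parameter log_archive_config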

 

If I go to machine1 and run SHOW CONFIGURATION, everything is OK:

DGMGRL> show configuration

Configuration - MYDB4dgconf

  Protection Mode: MaxPerformance

  Members:

  MYDB4 - Primary database

    MYDB3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 79 seconds ago)
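As mentioned below, I first ran VALIDATE DATABASE for both databases; the commands look like this (output not reproduced here):

DGMGRL> validate database 'MYDB4';
DGMGRL> validate database 'MYDB3';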

 

Let's now perform a SWITCHOVER (after first running VALIDATE DATABASE for both our databases, to confirm the SWITCHOVER can go through):

DGMGRL> switchover to 'MYDB3' ;

Performing switchover NOW, please wait...

Operation requires a connection to instance "MYDB3" on database "MYDB3"

Connecting to instance "MYDB3"...

Connected as SYSDBA.

New primary database "MYDB3" is opening...

Operation requires start up of instance "MYDB4" on database "MYDB4"

Starting instance "MYDB4"...

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is "MYDB3"

 

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

machine2@oracle:MYDB4:/app/machine2/oracle/product/12.1.0/network/admin #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

Again, the configuration files on the other server were modified first.  But all 4 files have been modified, and 2 of them (one on each machine) contain the previous state of the broker configuration.

Oracle 12cR1. After getting this ORA-01187:

11-04 17:55 MY12cDB SYS AS SYSDBA> select * from dba_temp_files;

select * from dba_temp_files

              *

ERROR at line 1:

ORA-01187: cannot read from file  because it failed verification tests

ORA-01110: data file 257: '/data/MY12cDB/ora3/DBL13/ETMPT01_01.dbf'

 

I decided to re-create the TEMPORARY TABLESPACE that this file belonged to, which happened to be the DEFAULT TEMPORARY TABLESPACE, named TEMP, of my 12cR1 database.

 

Here are the steps taken:

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/MY12cDB/ora3/DBL13/TEMP2.dbf' SIZE 10M AUTOEXTEND ON  MAXSIZE 2048M;

alter database default temporary tablespace TEMP2;

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

drop tablespace temp including contents and datafiles;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/DBL13/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M;

alter database default temporary tablespace TEMP;

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

drop tablespace temp2 including contents and datafiles;

 

 

And here is the SQL*PLUS output:

 

11-04 17:55 MY12cDB SYS AS SYSDBA> select * from dba_temp_files;

select * from dba_temp_files

              *

ERROR at line 1:

ORA-01187: cannot read from file  because it failed verification tests

ORA-01110: data file 257: '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf'

 

 

11-04 18:02 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/MY12cDB/ora3/MY12cDB/TEMP2.dbf' SIZE 10M AUTOEXTEND ON  MAXSIZE 2048M;

 

Tablespace created.

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> alter database default temporary tablespace TEMP2;

 

Database altered.

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

 

PROPERTY_VALUE

--------------------------

TEMP2

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> col PROPERTY_VALUE for a10

11-04 18:04 MY12cDB SYS AS SYSDBA> drop tablespace temp including contents and datafiles;

 

Tablespace dropped.

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M

*

ERROR at line 1:

ORA-01119: error in creating database file '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf'

ORA-27038: created file already exists

Additional information: 1

 

 

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> !ls -lh /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

-rw-rw---- 1 oracle oracle 257M Jan 26 10:05 /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> !rm /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M;

 

Tablespace created.

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> alter database default temporary tablespace TEMP;

 

Database altered.

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

 

PROPERTY_V

----------

TEMP

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> drop tablespace temp2 including contents and datafiles;

 

Tablespace dropped.

 

 

11-04 18:08 MY12cDB SYS AS SYSDBA>  select * from dba_temp_files;

 

FILE_NAME

   FILE_ID TABLESPACE_NAME         BYTES     BLOCKS STATUS       RELATIVE_FNO AUTOEXTENSIB   MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS

/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

         1 TEMP                104857600      12800 ONLINE                  1 YES          2147483648     262144            1  103809024       12672

 

To summarize:

  1. Create a new temporary tablespace TEMP2
  2. Make TEMP2 the default temporary tablespace
  3. Drop your original tablespace TEMP
  4. Create a new temporary tablespace TEMP
  5. Make TEMP the default temporary tablespace
  6. Drop TEMP2

And your default temporary tablespace is TEMP, as required, and has a brand new temp file that will not cause ORA-01187.
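One extra precaution before step 3 (dropping the original TEMP): make sure no session is still holding temp segments in it. A hedged sketch of the check I would run:

SQL> select s.sid, s.username, u.tablespace, u.segtype, u.blocks
     from v$session s join v$tempseg_usage u on s.saddr = u.session_addr;

If that returns rows for TEMP, it is worth waiting for (or killing) those sessions before the DROP.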

RacOneNode 11.2.0.4. Let's follow what occurs when we relocate a RacOneNode database from one node to another. Reminder: in RacOneNode, only one instance is up at a time.

 

Situation before the RELOCATE:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST1 is running on node node1

Online relocation: INACTIVE

 

On our "node1" server, let's launch the RELOCATE (from node1 to node2):

oracle@node1:/home/oracle $ srvctl relocate database -d RAC1NODE_DB -n node2
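As an aside, srvctl relocate database also accepts a timeout for the online relocation (in minutes); a hedged example, to be checked against "srvctl relocate database -h" on your version:

oracle@node1:/home/oracle $ srvctl relocate database -d RAC1NODE_DB -n node2 -w 30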

 

On the other node, let's monitor the RELOCATE with SRVCTL STATUS DATABASE:

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Instance RAC1NODE_INST1 is running on node node1

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Here we can see that for a moment, while the online relocation is active, 2 instances are up. This does not last long, however.

 

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Soon, while online relocation is still active, our database is said to be running on the destination node, and the first instance is now down.

 

Looking at the alert.log on node 1 (the instance that is going to be shut down):

2018-03-13 02:00:00.005000 +01:00

Closing scheduler window

Restoring Resource Manager plan DEFAULT_PLAN via scheduler window

Setting Resource Manager plan DEFAULT_PLAN via parameter

2018-03-13 03:10:51.311000 +01:00

Stopping background process CJQ0

2018-03-13 16:23:17.464000 +01:00

Reconfiguration started (old inc 2, new inc 4)

List of instances:

1 2 (myinst: 1)

Global Resource Directory frozen

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Submitted all GCS remote-cache requests

Fix write in gcs resources

Reconfiguration complete

2018-03-13 16:23:20.133000 +01:00

minact-scn: Master returning as live inst:2 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:27.330000 +01:00

ALTER SYSTEM SET service_names='RAC1NODE_DB' SCOPE=MEMORY SID='RAC1NODE_INST1';

Shutting down instance (transactional local)

Stopping background process SMCO

2018-03-13 16:23:28.504000 +01:00

Shutting down instance: further logons disabled

Stopping background process QMNC

2018-03-13 16:23:34.536000 +01:00

Stopping background process MMNL

2018-03-13 16:23:35.538000 +01:00

Stopping background process MMON

2018-03-13 16:23:36.540000 +01:00

Local transactions complete. Performing immediate shutdown

License high water mark = 45

2018-03-13 16:23:39.663000 +01:00

ALTER SYSTEM SET _shutdown_completion_timeout_mins=30 SCOPE=MEMORY;

ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

SMON: disabling tx recovery

Stopping background process RCBG

2018-03-13 16:23:41.737000 +01:00

SMON: disabling cache recovery

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 16

Redo thread 1 internally disabled at seq 2772 (LGWR)

Shutting down archive processes

Archiving is disabled

ARCH shutting down

ARCH shutting down

ARCH shutting down

ARC2: Archival stopped

ARC0: Archival stopped

ARC3: Archival stopped

ARC1: Archiving disabled thread 1 sequence 2772

Archived Log entry 2792 added for thread 1 sequence 2772 ID 0xde13bfb6 dest 1:

ARCH shutting down

ARC1: Archival stopped

NOTE: Deferred communication with ASM instance

2018-03-13 16:23:42.816000 +01:00

Thread 1 closed at log sequence 2772

Successful close of redo thread 1

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 4

Completed: ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

Shutting down archive processes

Archiving is disabled

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 2

Completed: ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: force a map free for map id 2

NOTE: force a map free for map id 4

NOTE: force a map free for map id 16

2018-03-13 16:23:44.226000 +01:00

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: Shutting down MARK background process

Stopping background process VKTM

NOTE: force a map free for map id 4684

NOTE: force a map free for map id 4683

2018-03-13 16:23:45.900000 +01:00

freeing rdom 0

2018-03-13 16:23:48.283000 +01:00

Instance shutdown complete

 

and the alert.log on node 2 (the instance that is started):

2018-03-13 16:23:09.103000 +01:00

Starting ORACLE instance (normal)

...

2018-03-13 16:23:37.540000 +01:00

minact-scn: Inst 2 is now the master inc#:4 mmon proc-id:3931 status:0x7

minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000

minact-scn: Master returning as live inst:1 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:46.352000 +01:00

Reconfiguration started (old inc 4, new inc 6)

List of instances:

2 (myinst: 2)

Global Resource Directory frozen

* dead instance detected - domain 0 invalid = TRUE

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Post SMON to start 1st pass IR

Instance recovery: looking for dead threads

Submitted all GCS remote-cache requests

Post SMON to start 1st pass IR

Fix write in gcs resources

Starting background process CJQ0

Reconfiguration complete

 

End result:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: INACTIVE

We are back to normal: the online relocation is over (hence INACTIVE), only one instance is up, and it is the one on node2.
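For the record, you can display the RAC One Node attributes of the database (candidate servers, online relocation timeout, and so on) with:

oracle@node1:/home/oracle $ srvctl config database -d RAC1NODE_DB

I am not reproducing the output here; on 11.2 it should show, among other things, that the database type is RAC One Node and which servers are candidates for relocation.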

I needed to increase the size of a TEMPORARY tablespace in a STANDBY database.  Hey, it's a standby database, so alterations must be made to the primary database, and then Data Guard will propagate those changes to the standby database.

So I went to the primary database and ran:

14:00:09 22-01 14:00 SYS > alter database tempfile '/appli/oracle/.../oradata02/****_temp_01.dbf'  AUTOEXTEND ON NEXT 10M MAXSIZE 13G  ;

 

Database altered.

180122alterMAXISEofTEMP.JPG

 

On the standby database, I was monitoring its alert log with

adrci> show alert -tail -f

waiting for a message notifying me of the change on the temp file.

To trigger that change, I switched logfiles on the primary to create a new archived log:

14:02:37 22-01 14:02 PRIMARYDB SYS > alter system switch logfile ;

 

System altered.

 

The alert log of the standby did mention applying the newly received archived logs but my temp file on the standby did not budge.  Since all I did on the primary was to alter the MAXSIZE of my temp file, I tried something more telling:

14:35:29 22-01 14:35 PRIMARYDB SYS > ALTER DATABASE TEMPFILE '/appli/oracle/..../oradata02/****_temp_01.dbf' RESIZE 7G;

 

Database altered.

180122RESIZEofTEMP.JPG

Again, I ran alter system switch logfile to propagate the change to the standby, but to no avail.

 

I googled that phenomenon and read in a DBI-SERVICES blog post that changes to temp files on the primary are not propagated to the standby because they do not generate redo.  So I logged out of SQL+ and started a brand-new session, in which I checked the redo I generated (with V$MYSTAT) when altering my temp file:

 

15:56:17 PRIMARYDB SYS AS SYSDBA > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo entries                                   0

redo size                                      0

redo entries for lost write detection          0

redo synch poll writes                         0

redo writes                                    0

redo synch writes                              0

redo size for direct writes                    0

 

15:56:26 PRIMARYDB SYS AS SYSDBA > alter database tempfile '/appli/oracle/..../oradata02/****_temp_01.dbf' autoextend on NEXT 10M MAXSIZE 11G  ;

 

Database altered.

 

15:56:37 PRIMARYDB SYS AS SYSDBA > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo size                                    816

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

15:56:43  SYS AS SYSDBA > alter system switch logfile ;

SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

System altered.

 

15:56:59 PRIMARYDB SYS AS SYSDBA >

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo size                                    816

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

15:57:00 PRIMARYDB SYS AS SYSDBA > select BYTES/1024/1024,MAXBYTES/1024/1024,STATUS,USER_BYTES/1024/1024 UsedMB  from dba_temp_files ;

 

BYTES/1024/1024 MAXBYTES/1024/1024 STATUS                    USEDMB

--------------- ------------------ --------------------- ----------

           8192              11264 ONLINE                      8191

 

That test shows that on my 11.2.0.2 database, my alter database tempfile does generate redo (3 redo entries).  Here is the same check, run later on the 27th of February, with a RESIZE this time:

 

27-02 12:27 PRIMARYDB SYS > alter database tempfile '/appli/oracle/..../oradata02/****_temp_01.dbf' resize 8g;

 

Database altered.

 

27-02 12:30 PRIMARYDB SYS > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

---------------------------------------- -------

redo size                                    792

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes

 

27-02 12:31 PRIMARYDB SYS >  alter system switch logfile ;

 

System altered.

 

27-02 12:32 PRIMARYDB SYS >  SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

---------------------------------------- -------

redo size                                    792

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

 

ALTER TABLESPACE TEMP ADD TEMPFILE '/appli/oracle/..../oradata02/****_temp_02.dbf' SIZE 2M ; => this works even though the STANDBY is open READ ONLY WITH APPLY.

And this generates no redo (no wonder, it's open READ-ONLY):

STATNAME                                           VALUE

-------------------------------------------- -----------------

redo entries                                           0
redo size                                              0
redo entries for lost write detection                  0
redo synch poll writes                                 0
redo writes                                            0
redo synch writes                                      0
redo size for direct writes                            0
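To confirm on the standby that the new tempfile really is there, you can also look at V$TEMPFILE (the post uses DBA_TEMP_FILES below; this is just a hedged alternative check):

SQL> select name, bytes/1024/1024 "Mo", status from v$tempfile;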

 

 

But to add a temporary tablespace, you must first create it on the PRIMARY:

27-02 16:21 PRIMARYDB SYS > CREATE TEMPORARY TABLESPACE "TEMP2" TEMPFILE '/appli/oracle/..../oradata02/****_temp2_01.dbf' SIZE 4M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K ;

Tablespace created.

 

27-02 16:36 PRIMARYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          8192 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP2            4 /appli/oracle/..../oradata02/****_temp2_01.dbf

 

On the STANDBY, you will discover that there is no new TEMPFILE:

27-02 16:23 STANDBYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

 

But that a new TEMPORARY TABLESPACE does exist:

27-02 16:39 STANDBYDB SYS >  select contents,STATUS,TABLESPACE_NAME   from dba_tablespaces where tablespace_name like 'TEMP%';

 

CONTENTS                    STATUS       TABLESP

--------------------------- ------------ -------

TEMPORARY                   ONLINE       TEMP

TEMPORARY                   ONLINE       TEMP2

 

So you must ADD a TEMPFILE to that empty TEMPORARY TABLESPACE (yes, that's what Oracle says):

27-02 16:42 STANDBYDB SYS > ALTER TABLESPACE temp2 ADD TEMPFILE '/appli/oracle/..../oradata02/****_temp2_01.dbf' SIZE 4M AUTOEXTEND ON MAXSIZE 20g;

 

Tablespace altered.

 

27-02 16:44 STANDBYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

 

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP             2 /appli/oracle/..../oradata02/****_temp_02.dbf

YES       TEMP2            4 /appli/oracle/..../oradata02/****_temp2_01.dbf

 

Nevertheless, if you DROP that new TEMPORARY TABLESPACE on the PRIMARY:

27-02 16:39 PRIMARYDB SYS > drop tablespace temp2 including contents and datafiles ;

Tablespace dropped.

 

It does reflect immediately on the STANDBY:

27-02 16:44 STANDBYDB SYS > /

 

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP             2 /appli/oracle/..../oradata02/****_temp_02.dbf

 

And while the PRIMARY ALERT.LOG says:

2018-02-27 16:48:53.648000 +01:00

drop tablespace temp2 including contents and datafiles

Deleted file /appli/oracle/..../oradata02/****_temp2_01.dbf

Completed: drop tablespace temp2 including contents and datafiles

180228ADDTEMPFILEdansALERTLOG.JPG

The STANDBY ALERT.LOG says it differently:

2018-02-27 16:48:55.800000 +01:00

Deleted file /appli/oracle/..../oradata02/*_temp2_01.dbf

Recovery dropped temporary tablespace 'TEMP2'

If you need a specific date format in the Oracle session of a program that you are not running interactively (and therefore, you cannot type “ALTER SESSION SET NLS_DATE_FORMAT = "DD-MM hh24:mi" ;”), you can define it in the shell that executes your program.

Here is a simple demonstration with SQL+.

Let’s create a tdform.ksh shell script that will run SQL+ 4 times, with NLS_DATE_FORMAT defined in 4 different ways.

 

#!/bin/ksh

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro nothing defined

select sysdate from dual ;

exit;

EOF

 

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

alter session set nls_date_format = "DD-MM-RR HH24:MI:SS" ;

pro NLS_DATE_FORMAT forced with an ALTER SESSION

select sysdate from dual ;

exit;

EOF

 

export NLS_DATE_FORMAT="YY-MM-DD hh24:mi"

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro NDF exported in SHELL as YY-MM-DD hh24:mi

select sysdate from dual ;

exit;

EOF

 

export NLS_DATE_FORMAT="DD-MM-RR hh24:mi:ss"

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro NDF exported in SHELL French format with seconds

select sysdate from dual ;

exit;

EOF

exit

 

Then let’s run that shell script and see if the format of “sysdate” varies each time.

 

oracle@mylinuxserver:/home/oracle $ ./tdform.ksh

nothing defined

29-JAN-18

 

Session altered.

 

NLS_DATE_FORMAT forced with an ALTER SESSION

29-01-18 17:02:04

 

NDF exported in SHELL as YY-MM-DD hh24:mi

18-01-29 17:02

 

NDF exported in SHELL French format with seconds

29-01-18 17:02:04

 

As demonstrated here, SQL+ inherits the NLS_DATE_FORMAT defined in the shell that executes it.  This is especially useful with other programs that do not let you run "ALTER SESSION SET NLS_DATE_FORMAT", such as impdp or RMAN.
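The same trick works for RMAN, for example to get readable completion times in LIST BACKUP output; a hedged sketch (the format string is up to you):

export NLS_DATE_FORMAT="DD-MM-RR hh24:mi:ss"
rman target / <<EOF
list backup summary;
exit;
EOF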