
Oracle 19c

 

To demonstrate the CREATE SCHEMA command, let's create 2 users on a test database.  I love to create 2 users at once with this simple command:

 

grant create session to fd1, fd2 identified by oracle, oracle;

Grant succeeded.

 

30-NOV-19 lesfous SYSTEM > select account_status,username,created from dba_users where created > sysdate-1;

 

ACCOUNT_STATUS                   USERNAME               CREATED

-------------------------------- ---------------------- ------------------

OPEN                             FD1                    30-NOV-19

OPEN                             FD2                    30-NOV-19

 

Let's grant them some rights so they can do some work:

grant SELECT_CATALOG_ROLE, select ANY dictionary, create any table,CREATE any view, create any synonym, create trigger to FD2;
grant SELECT_CATALOG_ROLE, select ANY dictionary, create any table,CREATE any view, create any synonym, create trigger to FD1;

 

Let's grant them some quota on a tablespace so they can create data in it:

SYSTEM > alter user fd2 quota unlimited on users;

User altered.

 

SYSTEM > alter user fd1 quota unlimited on users;

User altered

 

Now let's use the CREATE SCHEMA command for user FD2 with a simple example found on the Internet:

The first thing I wanted to share with you is that the CREATE SCHEMA command should really be named CREATE SCHEMA AUTHORIZATION because the AUTHORIZATION keyword is mandatory.  So here goes:

FD2 > CREATE SCHEMA AUTHORIZATION fd2

CREATE TABLE new_product (product VARCHAR2(900), color VARCHAR2(10) , quantity NUMBER)

CREATE VIEW redproduct_view AS SELECT product, quantity FROM new_product WHERE color = 'RED'

GRANT select ON redproduct_view  TO fd1; 

 

Schema created.

 

The above command demonstrates the 3 things you can do with a CREATE SCHEMA: CREATE TABLEs, CREATE VIEWs, and GRANT rights to other users.

Let's populate this new table:

FD2 > insert into new_product values ('Beads','BLUE',100);

1 row created.

FD2 > insert into new_product values ('Neckerchief','RED',2);

1 row created.

FD2 > insert into new_product values ('Waxstrips','BLUE',300);

1 row created.

FD2 > insert into new_product values ('Ding-A-Ling','RED',4);

1 row created.

 

After committing those 4 INSERTs, I connect as FD1 and check the contents of the only view that FD1 has rights on:

FD1 > select * from fd2.redproduct_view;

PRODUCT                                    QUANTITY

---------------------------------------- ----------

Neckerchief                                       2

Ding-A-Ling                                       4

 

So this is basically what CREATE SCHEMA does: it speeds things up when you have several objects to create and rights on them to grant, since you can do all of that in one command.  It's useful for creating a data-less schema.  If only one of the CREATEs or GRANTs generates an error, nothing is carried out: the whole CREATE SCHEMA statement is rolled back.  Note that indexes cannot be mentioned within a CREATE SCHEMA command, so you'll have to create your indexes with separate CREATE INDEX statements after running the CREATE SCHEMA command if you want performance indexes on the tables that CREATE SCHEMA has created.
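
For instance, if you wanted an index supporting the view's filter on COLOR, you would have to add it afterwards with a separate statement; a minimal sketch (the index name is mine):

FD2 > CREATE INDEX new_product_color_ix ON new_product (color);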

 

But I discovered another, less-documented downside to that CREATE SCHEMA command.  I wanted to create a schema for my own toolkit to monitor Oracle databases, so I tried to create this schema (here for the FD1 user):

FD1 > select object_type,object_name,created from user_objects;

no rows selected

 

FD1 > CREATE SCHEMA  AUTHORIZATION FD1

    

CREATE TABLE dmk_data_files as SELECT tablespace_name , file_id

, CASE

WHEN f.tablespace_name LIKE 'SYS%' THEN 'SYSTEM'

WHEN f.tablespace_name LIKE 'UNDO%' THEN 'UNDO'

WHEN f.tablespace_name LIKE '%IDX%' THEN 'INDEX'

WHEN f.tablespace_name LIKE '%INDEX%' THEN 'INDEX'

ELSE 'TABLE'

END AS tablespace_type

FROM dba_data_files f

ORDER BY tablespace_name

 

CREATE TABLE DMK_my_ash AS

SELECT /*+LEADING(x) USE_NL(h)*/ h.*

FROM dba_hist_snapshot x  ,dba_hist_active_sess_history h

WHERE x.end_interval_time >=TO_DATE('201401261100','yyyymmddhh24mi') AND x.begin_interval_time<=TO_DATE('201401261300','yyyymmddhh24mi')

AND h.sample_time BETWEEN TO_DATE('201401261100','yyyymmddhh24mi') AND TO_DATE('201401261300','yyyymmddhh24mi')

AND h.snap_id= X.snap_id AND h.dbid= x.dbid AND h.instance_number= x.instance_number

 

CREATE TABLE dmk_objects

(object_id NUMBER NOT NULL ,owner VARCHAR2(30) NOT NULL,object_name VARCHAR2(128) NOT NULL,subobject_name VARCHAR2(30)

,PRIMARY KEY (OBJECT_ID));

 

GRANT SELECT on DMK_MY_ASH to fd2 ;

Table created.

 

FD1 > FD1 >

GRANT SELECT on DMK_MY_ASH to fd2

                *

ERROR at line 1:

ORA-00942: table or view does not exist

 

 

FD1 > select object_type,object_name,created from user_objects;

OBJECT_TYPE             OBJECT_NAME            CREATED

----------------------- ---------------------- ------------------

TABLE                   DMK_OBJECTS            01-DEC-19

INDEX                   SYS_C007595            01-DEC-19

 

As you can see above, this CREATE SCHEMA command is made up of 3 CREATE TABLEs and 1 GRANT.  It did not produce the "Schema created" message, but I got one "Table created." and one ORA-00942. My SELECT FROM USER_OBJECTS shows that only one of the three tables was created, along with an unnamed index.  The ORA-00942 occurred because the DMK_MY_ASH table was not created.  So this is very tricky because, as stated before, the documentation says that if one of the statements fails, all statements are rolled back.  But here one of our 3 tables was created.  This is because elsewhere the documentation also says "The CREATE SCHEMA statement supports the syntax of these statements only as defined by standard SQL, rather than the complete syntax supported by Oracle Database." Just one sentence to explain that, in my example above, the DMK_DATA_FILES table could not be created, probably because of its CASE expression, and the DMK_MY_ASH table could not be created, probably because of its hint.  The DMK_OBJECTS table, however, was created because it uses no fancy Oracle feature.

Little challenge to my dear readers: where did that SYS_C007595 index come from (as I did not issue any CREATE INDEX command, and the CREATE SCHEMA command does not support it anyway)?
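
If you want a hint before commenting, a quick look at the data dictionary as FD1 should point you in the right direction; a sketch:

FD1 > select constraint_name, constraint_type, table_name from user_constraints where index_name = 'SYS_C007595';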

Please comment below.

SQL Server 2016

 

You know you can store several database backups into one .bak file.  You can even store backups from different databases into the same .bak file, known as a media set.

So when looking from Windows at a .bak file, you might wonder what it actually contains.  In come RESTORE HEADERONLY and RESTORE FILELISTONLY.

 

RESTORE HEADERONLY will list the backup sets in the file (whenever you back up, a new backup set is created; its name appears in the "BackupName" column):
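
Something along these lines (the file name is purely illustrative):

RESTORE HEADERONLY FROM DISK = N'E:\backups\mymediaset.bak';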

By the way, you can run that against tempdb; it's safer.

Now if you want to see the files in each of those backup sets, you must use RESTORE FILELISTONLY with the number from the Position column above. Let's look at the files for backup set 2:
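
For instance (same illustrative file name as above):

RESTORE FILELISTONLY FROM DISK = N'E:\backups\mymediaset.bak' WITH FILE = 2;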

 

 

Watch out: if you use RESTORE FILELISTONLY without mentioning "WITH FILE":

You get the first backup set of that media set only!

SQL Server 2016

Right-click on the database you want to overwrite and choose tasks/restore/database:

 

In the “source” part of the “general” page, fetch your .dat file:

 

 

In order to see it, make sure you select “All files”:

 

 

You can see the contents of your .dat file by clicking on “Contents”:

Make sure the “Type” column says Database and the “Component” column says full.

 

Go to the options page: uncheck “Take tail-log backup…” and check "... WITH REPLACE" and "close existing connections...":

 

 

Back to the “General” page, click on Verify Backup Media.

You should get “Backup media verified successfully” at the top.

 

Go to the “Files” page, check “relocate files to folder” and enter the paths to the mdf and ldf files of your target database in Restore As:

 

 

Then click on OK at the bottom.  A green progress bar shows the progress of the restoration:

Which should finish with:

Clicking on OK will close the restore wizard.

 

All of this can also be carried out with this T-SQL command:

RESTORE DATABASE gestiondossier

FROM disk = N'E:\temp\gestiondossier_201909080100_1.dat'

WITH replace,

MOVE 'gestiondossier' TO 'F:\MSSQL2016\MSSQL13.INSTP00\MSSQL\DATA\gestiondossier.mdf',

MOVE 'gestiondossier_log' TO 'G:\MSSQL2016\MSSQL13.INSTP00\MSSQL\LOG\gestiondossier.ldf',

   stats = 5

GO

Oracle 12.1.0.2

I had to load a metadata_only dump on several environments. Prior to running impdp, I had to create a couple of tablespaces and a couple of users.

 

On environment 1 (ENV1), here's how I created my 2 tablespaces:

CREATE bigfile tablespace ARCHIVE_DATA datafile '/data/cr05/ora1/ARCHIVE_DATA.dbf' size 2G autoextend ON next 500M;

CREATE bigfile tablespace ARCHIVE_INDEX datafile '/data/cr05/ora2/ARCHIVE_INDEX.dbf' size 600M autoextend ON next 500M MAXSIZE 9T;

Then I created my 2 users, and used a par file to remap tablespaces and users for my impdp job, which I ran like this:

impdp system dumpfile=structure80tables.dmp parfile=import80tablesdeOAQ1dansOAL1.par job_name=metadata80tables
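
The par file itself is not shown here; as a rough sketch, it contained directives along these lines (the directory, schema names and source tablespace names are placeholders; only ARCHIVE_DATA and ARCHIVE_INDEX come from this post):

DIRECTORY=DATA_PUMP_DIR
SCHEMAS=SRCUSER1,SRCUSER2
REMAP_SCHEMA=SRCUSER1:ARCHUSER1
REMAP_SCHEMA=SRCUSER2:ARCHUSER2
REMAP_TABLESPACE=SRC_DATA:ARCHIVE_DATA
REMAP_TABLESPACE=SRC_INDEX:ARCHIVE_INDEX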

So it took 23 minutes total, including 20 minutes (1185 seconds) spent loading 930 empty indexes.  As I said, this is metadata_only, so we're loading empty tables and indexes.  The resulting schema weighs 77GB:

Here you're probably wondering (as I did!) why empty tables and indexes take up so much physical space.  Impdp actually creates the segments for those objects, some indexes have a big INITIAL extent (up to 500MB), and there are 968 indexes in total!
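
If you want to see where that space goes, a query along these lines (the schema name is a placeholder) breaks the figure down by segment type:

select segment_type, count(*) nb_segments, round(sum(bytes)/1024/1024/1024,1) gb
from dba_segments
where owner = 'ARCHUSER1'
group by segment_type
order by gb desc;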

 

Then I moved on to ENV2. On ENV2, here's how I created my 2 tablespaces:

CREATE bigfile tablespace ARCHIVE_DATA datafile '/data/pkgrcr04/ora1/OAL2/ARCHIVE_DATA.dbf' size 2G autoextend ON next 500M;

CREATE bigfile tablespace ARCHIVE_INDEX datafile '/data/pkgrcr04/ora2/OAL2/ARCHIVE_INDEX.dbf' size 60G autoextend ON next 500M MAXSIZE 9T;

 

Then I created my 2 users, and used a par file to remap tablespaces and users for my impdp job, which I ran like this:

impdp system dumpfile=structure80tables.dmp parfile=import80tablesdeOAQ1dansOAL1et2.par job_name=metadata80tables

This time it took 4 minutes total, including 77 seconds for those 930 empty indexes, because I had pre-allocated 60GB!

Dear DBA Frank

19c preupgrade.jar

Posted by Dear DBA Frank, 2019-8-20

See https://docs.oracle.com/en/database/oracle/oracle-database/19/spuss/using-the-pre-upgrade-information-tool-for-oracle-db…

 

Oracle provides a tool that runs several checks before upgrading your database, be it CDB or non-CDB, to 19c.

preupgrade.jar comes with Oracle 19c, but you can also download it separately, so as to put it on several servers where you will run upgrades to 19c in the future.  Here are the contents of the zip file:

Here is an extract from the Oracle documentation:

To run preupgrade.jar, you must set your environment to the old ORACLE_HOME:

 

mobo12cR1@oracle::~ # export ORACLE_BASE=/app/mobo12cR1/oracle

mobo12cR1@oracle::~ # export ORACLE_HOME=/app/mobo12cR1/oracle/product/12.1.0

mobo12cR1@oracle::~ # export ORACLE_SID=RMP2FD

mobo12cR1@oracle:RMP2FD:~ # export PATH=.:$ORACLE_HOME/bin:$PATH

 

Then run preupgrade.jar with the java from your old ORACLE_HOME:

mobo12cR1@oracle:RMP2FD:~ # $ORACLE_HOME/jdk/bin/java -jar /app/mobo12cR1/oracle/19c/rdbms/admin/preupgrade.jar TERMINAL TEXT

 

Report generated by Oracle Database Pre-Upgrade Information Tool Version

19.0.0.0.0 Build: 1 on 2019-08-13T11:30:13

 

Upgrade-To version: 19.0.0.0.0

 

=======================================

Status of the database prior to upgrade

=======================================

      Database Name:  RMP2FD

     Container Name:  RMP2FD

       Container ID:  0

            Version:  12.1.0.2.0

     DB Patch Level:  No Patch Bundle applied

         Compatible:  11.2.0.0.0

          Blocksize:  8192

           Platform:  Linux x86 64-bit

      Timezone File:  18

  Database log mode:  ARCHIVELOG

           Readonly:  FALSE

            Edition:  EE

 

  Oracle Component                       Upgrade Action    Current Status

  ----------------                       --------------    --------------

  Oracle Server                          [to be upgraded]  VALID

  Oracle XML Database                    [to be upgraded]  VALID

 

==============

BEFORE UPGRADE

==============

 

  REQUIRED ACTIONS

  ================

  None

 

  RECOMMENDED ACTIONS

  ===================

  1.  Remove initialization parameters that Oracle has obsoleted or removed.

      This action may be done now or when starting the database in upgrade mode

      using the target ORACLE HOME.

 

      Parameter

      ---------

      utl_file_dir

 

      If parameters that are obsolete or removed from the target release are

      present in the pfile/spfile, the Oracle database may not start, or it may

      start with an ORA- error.

 

  2.  Run 12.1.0.2.0 $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile invalid

      objects.  You can view the individual invalid objects with

 

        SET SERVEROUTPUT ON;

        EXECUTE DBMS_PREUP.INVALID_OBJECTS;

 

      2 objects are INVALID.

 

      There should be no INVALID objects in SYS/SYSTEM or user schemas before

      database upgrade.

 

  3.  Perform one of the following:

       1) Expire user accounts that use only the old 10G password version and

      follow the procedure recommended in Oracle Database Upgrade Guide under

      the section entitled, "Checking for Accounts Using Case-Insensitive

      Password Version".

       2) Explicitly set SQLNET.ALLOWED_LOGON_VERSION_SERVER in the 19

      SQLNET.ORA to a non-Exclusive Mode value, such as "11". (This is a short

      term approach and is not recommended because it will retain known

      security risks associated with the 10G password version.)

 

      Your database system has at least one account with only the 10G password

      version (see the PASSWORD_VERSIONS column of DBA_USERS).

 

      Starting with Oracle Database release 12.2.0.1, Exclusive Mode is the new

      default password-based authentication mode. All Exclusive Mode

      login/authentication attempts will fail for preexisting user accounts

      which only have the 10G password version and neither the 11G or 12C

      password version (see DBA_USERS.PASSWORD_VERSIONS.) For more information,

      refer to "Understanding Password Case Sensitivity and Upgrades" in the

      Oracle Database Upgrade Guide.

 

  4.  Remove the SEC_CASE_SENSITIVE_LOGON instance initialization parameter

      setting, to configure your system to use case sensitive password

      authentication by default.

 

      Your database system is configured to enforce case insensitive password

      authentication (the SEC_CASE_SENSITIVE_LOGON instance initialization

      parameter setting is FALSE).

 

      Starting with Oracle Database release 12.2, Exclusive Mode is the default

      password-based authentication mode. Case insensitive password

      authentication is not supported in Exclusive Mode. If your system needs

      to use case insensitive password authentication, Exclusive Mode must be

      switched off prior to the upgrade. See the Network Reference Manual

      chapter about the SQLNET.ORA parameter

      SQLNET.ALLOWED_LOGON_VERSION_SERVER for more details on Exclusive Mode.

 

  5.  (AUTOFIXUP) Gather stale data dictionary statistics prior to database

      upgrade in off-peak time using:

 

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

 

      Dictionary statistics do not exist or are stale (not up-to-date).

 

      Dictionary statistics help the Oracle optimizer find efficient SQL

      execution plans and are essential for proper upgrade timing. Oracle

      recommends gathering dictionary statistics in the last 24 hours before

      database upgrade.

 

      For information on managing optimizer statistics, refer to the 12.1.0.2

      Oracle Database SQL Tuning Guide.

 

  6.  (AUTOFIXUP) Directly grant ADMINISTER DATABASE TRIGGER privilege to the

      owner of the trigger or drop and re-create the trigger with a user that

      was granted directly with such. You can list those triggers using: SELECT

      OWNER, TRIGGER_NAME FROM DBA_TRIGGERS WHERE

      TRIM(BASE_OBJECT_TYPE)='DATABASE' AND OWNER NOT IN (SELECT GRANTEE FROM

      DBA_SYS_PRIVS WHERE PRIVILEGE='ADMINISTER DATABASE TRIGGER').

 

      There is one or more database triggers whose owner does not have the

      right privilege on the database.

 

      The creation of database triggers must be done by users granted with

      ADMINISTER DATABASE TRIGGER privilege. Privilege must have been granted

      directly.

 

  INFORMATION ONLY

  ================

  7.  Consider removing the following deprecated initialization parameters.

 

      Parameter

      ---------

      sec_case_sensitive_logon

 

      These deprecated parameters probably will be obsolete in a future release.

 

  8.  Ensure there is additional disk space in LOG_ARCHIVE_DEST_1 for at least

      2181 MB of archived logs.  Check alert log during the upgrade that there

      is no write error to the destination due to lack of disk space.

 

      Archiving cannot proceed if the archive log destination is full during

      upgrade.

 

      Archive Log Destination:

       Parameter    :  LOG_ARCHIVE_DEST_1

       Destination  :  /data/lxsora04/ora10/RMP2FD/arch

 

      The database has archiving enabled.  The upgrade process will need free

      disk space in the archive log destination(s) to generate archived logs to.

 

  9.  Check the Oracle Backup and Recovery User's Guide for information on how

      to manage an RMAN recovery catalog schema.

 

      If you are using a version of the recovery catalog schema that is older

      than that required by the RMAN client version, then you must upgrade the

      catalog schema.

 

      It is good practice to have the catalog schema the same or higher version

      than the RMAN client version you are using.

 

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database RMP2FD

  which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

 

    SQL>@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

=============

AFTER UPGRADE

=============

 

  REQUIRED ACTIONS

  ================

  None

 

  RECOMMENDED ACTIONS

  ===================

  10. (AUTOFIXUP) If you use the -T option for the database upgrade, then run

      $ORACLE_HOME/rdbms/admin/utluptabdata.sql after the upgrade is complete,

      to VALIDATE and UPGRADE any user tables affected by changes to

      Oracle-Maintained types.

 

      There are user tables dependent on Oracle-Maintained object types.

 

      If the -T option is used to set user tablespaces to READ ONLY during the

      upgrade, user tables in those tablespaces, that are dependent on

      Oracle-Maintained types, will not be automatically upgraded. If a type is

      evolved during the upgrade, any dependent tables need to be re-validated

      and upgraded to the latest type version AFTER the database upgrade

      completes.

 

  11. Upgrade the database time zone file using the DBMS_DST package.

 

      The database is using time zone file version 18 and the target 19 release

      ships with time zone file version 32.

 

      Oracle recommends upgrading to the desired (latest) version of the time

      zone file.  For more information, refer to "Upgrading the Time Zone File

      and Timestamp with Time Zone Data" in the 19 Oracle Database

      Globalization Support Guide.

 

  12. To identify directory objects with symbolic links in the path name, run

      $ORACLE_HOME/rdbms/admin/utldirsymlink.sql AS SYSDBA after upgrade.

      Recreate any directory objects listed, using path names that contain no

      symbolic links.

 

      Some directory object path names may currently contain symbolic links.

 

      Starting in Release 18c, symbolic links are not allowed in directory

      object path names used with BFILE data types, the UTL_FILE package, or

      external tables.

 

  13. (AUTOFIXUP) Gather dictionary statistics after the upgrade using the

      command:

 

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

 

      Oracle recommends gathering dictionary statistics after upgrade.

 

      Dictionary statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans. After a database

      upgrade, statistics need to be re-gathered as there can now be tables

      that have significantly changed during the upgrade or new tables that do

      not have statistics gathered yet.

 

  14. Gather statistics on fixed objects after the upgrade and when there is a

      representative workload on the system using the command:

 

        EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

 

      This recommendation is given for all preupgrade runs.

 

      Fixed object statistics provide essential information to the Oracle

      optimizer to help it find efficient SQL execution plans.  Those

      statistics are specific to the Oracle Database release that generates

      them, and can be stale upon database upgrade.

 

      For information on managing optimizer statistics, refer to the 12.1.0.2

      Oracle Database SQL Tuning Guide.

 

  ORACLE GENERATED FIXUP SCRIPT

  =============================

  All of the issues in database RMP2FD

  which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by

  executing the following

 

    SQL>@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

 

==================

PREUPGRADE SUMMARY

==================

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade.log

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

  /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

Execute fixup scripts as indicated below:

 

Before upgrade:

 

Log into the database and execute the preupgrade fixups

@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

After the upgrade:

 

Log into the database and execute the postupgrade fixups

@/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/postupgrade_fixups.sql

 

Preupgrade complete: 2019-08-13T11:30:14

 

For information, preupgrade.jar writes its log to the directory given by the DIR parameter of its command line, or to $ORACLE_BASE/cfgtoollogs/db_unique_name/preupgrade if $ORACLE_BASE has been defined, or to $ORACLE_HOME/cfgtoollogs/db_unique_name/preupgrade if it has not. The log is /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade.log

 

Then run the preupgrade_fixups.sql script that preupgrade.jar generated (if you are upgrading a CDB, several preupgrade_fixups scripts will have been generated, each bearing a PDB name):

mobo12cR1@oracle:RMP2FD:~ #  sqlplus '/as sysdba'

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 13 11:32:35 2019

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

 

SQL> @/app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

Executing Oracle PRE-Upgrade Fixup Script

 

Auto-Generated by:       Oracle Preupgrade Script

                         Version: 19.0.0.0.0 Build: 1

Generated on:            2019-08-13 11:30:10

 

For Source Database:     RMP2FD

Source Database Version: 12.1.0.2.0

For Upgrade to Version:  19.0.0.0.0

 

Preup                             Preupgrade

Action                            Issue Is

Number  Preupgrade Check Name     Remedied    Further DBA Action

------  ------------------------  ----------  --------------------------------

    1.  parameter_obsolete        NO          Manual fixup recommended.

    2.  invalid_objects_exist     NO          Manual fixup recommended.

    3.  exclusive_mode_auth       NO          Manual fixup recommended.

    4.  case_insensitive_auth     NO          Manual fixup recommended.

    5.  dictionary_stats          YES         None.

    6.  trgowner_no_admndbtrg     YES         None.

    7.  parameter_deprecated      NO          Informational only.

                                              Further action is optional.

    8.  min_archive_dest_size     NO          Informational only.

                                              Further action is optional.

    9.  rman_recovery_version     NO          Informational only.

                                              Further action is optional.

 

The fixup scripts have been run and resolved what they can. However,

there are still issues originally identified by the preupgrade that

have not been remedied and are still present in the database.

Depending on the severity of the specific issue, and the nature of

the issue itself, that could mean that your database is not ready

for upgrade.  To resolve the outstanding issues, start by reviewing

the preupgrade_fixups.sql and searching it for the name of

the failed CHECK NAME or Preupgrade Action Number listed above.

There you will find the original corresponding diagnostic message

from the preupgrade which explains in more detail what still needs

to be done.

 

PL/SQL procedure successfully completed.

 

Oracle is kind enough to tell us which issues have been solved and which have not. For the latter, we must go to preupgrade_fixups.sql and look up the appropriate paragraph. Isn't it nice that we get an explanation on what to do with those outstanding issues? 

 

mobo12cR1@oracle:RMP2FD:/app/exploit/shell # view +/parameter_obsolete /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

 

So the first issue is the obsolete utl_file_dir parameter. Let's get rid of it.

SQL>  alter system reset utl_file_dir;

System altered.

 

The second issue is the invalid_objects_exist one.  In my case it was a couple of objects in a user schema that are not used. Still I ran utlrp.sql:

SQL> @?/rdbms/admin/utlrp.sql

then had no more invalid objects.

 

The third issue is exclusive_mode_auth. Let's see what preupgrade_fixups.sql says about that:

mobo12cR1@oracle:RMP2FD:/app/exploit/shell # view +/exclusive_mode_auth /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

To identify those accounts, use the procedure labelled "Checking for Accounts Using Case-Insensitive Password Version" in https://docs.oracle.com/cd/F19136_01/upgrd/database-upgrade-guide.pdf

which says that you must make a note of those accounts, make a change in sqlnet.ora, run the upgrade to 19c and then expire the password of those accounts.

SELECT  username,password_versions

FROM dba_users

where ( password_versions = '10G ' OR password_versions = '10G HTTP ' ) and username !='ANONYMOUS';

So I make a note that I'll have to expire the password of that user after the upgrade (ALTER USER username PASSWORD EXPIRE;).

Right now I must add a line in my sqlnet.ora:

 

mobo12cR1@oracle:RMP2FD:/app/mobo12cR1/oracle/product/12.1.0/network/admin # echo "SQLNET.ALLOWED_LOGON_VERSION_SERVER=10" >> sqlnet.ora ; tail sqlnet.ora

DIAG_ADR_ENABLED=on

ADR_BASE=/app/mobo12cR1/oracle/diag/clients

SQLNET.ALLOWED_LOGON_VERSION_SERVER=10

 

The next one is case_insensitive_auth

view +/case_insensitive_auth /app/mobo12cR1/oracle/cfgtoollogs/RMP2FD/preupgrade/preupgrade_fixups.sql

 

Since 12.2, case-sensitive passwords are the rule (case-insensitive authentication is not supported in Exclusive Mode, the new default).  So I must reset my sec_case_sensitive_logon=FALSE parameter.
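
A sketch of that reset, mirroring the utl_file_dir reset above (assuming the instance was started with an spfile; the stored value is removed and the default applies at the next restart):

SQL> alter system reset sec_case_sensitive_logon;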

The dictionary_stats and trgowner_no_admndbtrg issues got fixed by this wonderful preupgrade_fixups.sql, and the remaining issues are optional. So we can move on to the upgrade to 19c itself (the subject of another blog post, maybe).

As Tim Hall would say: hope this helps!

Let's clone a 12.1.0.2 database.

 

Copy the init file of the source database to the target machine.  Then edit that new init file so as to adapt the names and paths to the target machine.  Create any missing paths.

vi initRMP2FD.ora

 

On the target machine, add a static entry for your future clone database

vi listener.ora
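
For example, a static registration along these lines (the listener name and ORACLE_HOME path are illustrative; only the RMP2FD SID comes from this post):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = RMP2FD)
      (ORACLE_HOME = /app/oracle/product/12.1.0)
      (SID_NAME = RMP2FD)
    )
  )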

then reload the listener

lsnrctl reload

In the tnsnames.ora of the target machine, add an alias for the target database

vi tnsnames.ora

Test that the new alias works

tnsping RMP2FD

 

SQL> startup nomount pfile=initRMP2FD.ora

ORACLE instance started.

 

05-07 15:50 RMP2FD SYS AS SYSDBA> alter system register;

System altered.

 

On the target machine, create a password file for your new database

orapwd file=orapwRMP2FD entries=5 sysbackup=y

 

On the target machine, add a new entry in oratab for your target database

vi /etc/oratab

 

Add some parallelism on the RMAN TARGET (that is, the source database):

rman TARGET sys@RMP2 AUXILIARY sys@RMP2FD

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Jul 5 16:40:07 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

target database Password:

connected to target database: RMP2 (DBID=2787549961)

auxiliary database Password:

connected to auxiliary database: RMP2FD (not mounted)

 

RMAN> configure device type disk parallelism 4 backup type to backupset ;

using target database control file instead of recovery catalog

old RMAN configuration parameters:

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:

CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters are successfully stored

 

 

On the source machine, add an entry for your target database in tnsnames.ora:

 

RMP2FD =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = targethost)(PORT = 1568))

    )

    (CONNECT_DATA =

      (SID = RMP2FD)

    )

  )

 

Then, on the target machine, make sure your target instance is started with a pfile and the database is not mounted:

SQL> startup nomount pfile='initRMP2FD.ora';

 

Then, on the target machine, launch RMAN connecting to both the source (TARGET) database and the target (AUXILIARY) database:

rman TARGET sys@RMP2 AUXILIARY sys@RMP2FD

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Jul 10 12:43:08 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

target database Password:

connected to target database: RMP2 (DBID=2787549961)

auxiliary database Password:

connected to auxiliary database: RMP2FD (not mounted)

 

Then, run your RMAN "DUPLICATE DATABASE FROM ACTIVE USING COMPRESSED BACKUPSET" command:

DUPLICATE DATABASE TO RMP2FD FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET SPFILE

    parameter_value_convert ('RMP2','RMP2FD')

    set db_file_name_convert='/data/xp32/ora1/RMP2/','/data/xs04/ora1/RMP2FD/','/data/lxpa0032/ora2/RMP2/','/data/xs04/ora2/RMP2FD/','/data/lxpa0032/ora3/RMP2/','/data/xs04/ora3/RMP2FD/'

    set log_file_name_convert='/data/lxpa0032/orared01/RMP2/','/data/xs04/orared01/RMP2FD','/data/lxpa0032/orared02/RMP2/','/data/xs04/orared02/RMP2FD'

    set audit_file_dest='/app/xs04/oracle/adm/RMP2FD/audit'

    set core_dump_dest='/app/xs04/oracle/diag/rdbms/unknown/RMP2FD/cdump'

    set control_files='/data/xs04/ora1/RMP2FD/ctl/control01.ctl','/data/xs04/ora1/RMP2FD/ctl/control02.ctl','/data/xs04/ora1/RMP2FD/ctl/control03.ctl'

    set db_name='RMP2FD'

    set diagnostic_dest='/app/xs04/oracle/'

    set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xs04)(PORT=36080))'

    set log_archive_dest_1='location=/data/xs04/ora10/RMP2FD/arch'

    set log_archive_dest_2='location=/data/xs04/ora11/RMP2FD/arch'

    set utl_file_dir='/app/xs04/oracle/adm/RMP2FD/log'

      NOFILENAMECHECK;

Make sure your various "set" commands convert all the paths that must be converted (have a look at all the paths mentioned in your source database init file).

 

Then, watch RMAN do its magic:

Starting Duplicate Db at 10-JUL-19

using target database control file instead of recovery catalog

allocated channel: ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: SID=12 device type=DISK

current log archived

contents of Memory Script:

{

   restore clone from service  'RMP2' using compressed backupset

   spfile to  '/app/lxsora04/oracle/product/12.1.0/dbs/spfileRMP2FD.ora';

   sql clone "alter system set spfile= ''/app/lxsora04/oracle/product/12.1.0/dbs/spfileRMP2FD.ora''";

}

executing Memory Script

Starting restore at 10-JUL-19

using channel ORA_AUX_DISK_1

 

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using compressed network backup set from service RMP2

channel ORA_AUX_DISK_1: restoring SPFILE

...

 

I got one error only:

Executing: alter database enable block change tracking using file '/data/xp32/dump1/oracle/RMP2/save/RMAN/trackRMP2'

ORACLE error from auxiliary database: ORA-19751: could not create the change tracking file

ORA-19750: change tracking file: '/data/xp32/dump1/oracle/RMP2/save/RMAN/trackRMP2'

ORA-27040: file create error, unable to create file

Linux-x86_64 Error: 2: No such file or directory

because I had not entered a "set db_file_name_convert" for the path of the block change tracking file.  Fortunately, RMAN ignored that error and continued:

Ignoring error, reattempt command after duplicate finishes

contents of Memory Script:

{

   Alter clone database open resetlogs;

}

executing Memory Script

database opened

Finished Duplicate Db at 10-JUL-19

 

My target database is now a clone of my source database:

I was asked to give my opinion on someone else's analysis of a performance run.   Here's the story.

A benchmark environment was set up at one-fourth of the production environment, to run a stress test.  The stress test ran from 2:57PM to 4:35PM on May 16.  Just for the heck of it, here are some numbers about that run:

The performance team report of that run starts with a bunch of figures about the machine resources, the transaction rate, the response times per web service etc.  Then appears the "Recommendations for the database" part of the report.  Here is how it starts:

On the screenshot above, you will have recognized the "Top Activity" part of OEM.  The grey vertical bar starts at 3:12PM and so focuses on the first half of our stress test.  Yes yes, the small, mostly green mountain on the right-hand side.  For the curious among you, the big mountain from about 4AM to 6AM was the loading of the test data.

1st comment:

During the whole duration of the stress test (with 50 users), the AAS (Average Active Sessions) never goes above 0.6, and the database activity is mostly CPU (the green color).  These 2 facts tell me that whatever the users were doing did not require many resources, and those users must not have suffered any waiting (at least not from the database). I asked and was told that no, there was no complaint about performance during the stress test.

 

The first sentence of the analysis, which you can see at the top of the screenshot, says in French "3 SQL statements use lots of resources (presence of Full Table Scans)".  The said 3 statements must be the 3 we can see at the bottom left-hand corner of the screenshot.  The analysis goes on to make recommendations on each of those 3.

 

2nd comment:

Given that the whole stress test did not use many resources, how could 3 SQL statements use "lots of resources"?  Unless what is meant is "most of the resources used by the stress test".  According to my first comment, the stress test gave satisfactory results in terms of performance.  So why even bother looking at what used most of the little CPU that was used?

 

3rd comment:

The analysis says those 3 statements used "a lot of resources", but the screenshot shows they used CPU only (they're all green).  OK, so they did use some CPU -- but that's what we like our databases to do, right? (That's what we pay per-CPU Oracle licenses for.)

 

4th comment:

The analysis goes on to say that the statements used lots of resources because there were full table scans. OK, we've already established that those statements did not use lots of resources.  But the impact of a full table scan depends on the size of the tables being scanned.  If the tables that were fully scanned only weigh a few MBs, the impact of the FTS is negligible.  Maybe that analysis meant that there were unnecessary table scans -- because an FTS can be the best option, especially in the case of small tables.

 

Then there is a recommendation for the first of those 3 SQL statements:

The recommendation ("préconisation" in French) is to create a performance index on the 3 columns of table COTISATION that are mentionned in the FROM and WHERE clauses.

 

5th comment:

Apart from the fact that no evidence of the full table scan is shown, no justification for that 3-column index is given.  Again, the full table scan could be justified.  Maybe an index on CD_ORIGINE, ID_VEHICULE (possibly with the 1st column compressed) would suffice.  Maybe those 2 tables could even be stored in a table cluster, since they share that ID_GARANTIE.
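
Purely as an illustration of that alternative (column and table names are taken from the report; this is not a recommendation):

CREATE INDEX cotisation_orig_veh_ix ON cotisation (cd_origine, id_vehicule) COMPRESS 1;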

 

There ensues a recommendation for the 2nd SELECT:

This time, the analysis says only "investigate the creation of a performance index to avoid the full table scan on table PARAM_FRANCHISE_VALEUR".

 

6th comment:

Again, no justification.  With no comparison between an FTS and an index path, one cannot claim that an index is required.

 

Lastly, a recommendation about the 3rd SQL statement:

 

This time we're given a piece of the explain plan. The analysis suggests the creation of 2 indexes, one on the PARAM_LIBELLE table and another one on the PARAM_REP_AUTO table, in the hope of doing away with 2 FTSs.

 

7th comment:

Apart from the fact that again there is no justification for the bad FTS, it seems that there is only one FTS.  Given that the INDEX FAST FULL SCAN is on an index named X1_EDA_PARAM_REP_AUTO_IX5, I have a feeling that index belongs to table PARAM_REP_AUTO, so that table is accessed by rowid, not fully scanned.

 

8th comment:

And lastly, the FTS that did take place has a cost of 15, which is negligible.

 

I do not know the person who wrote that analysis.  I'm sharing this with you all in the hope of sharing useful information and of learning something new from your feedback!

My conclusions from this exercise are:

  1. don't spend time trying to improve what is working well already (unless there is a need for scalability)
  2. I love green!  If the SQL code produces green in OEM Top Activity, then I'm happy
  3. Oracle performance is complex and many things that are often considered bad can actually be good, so it's best to test and substantiate one's claims with empirical evidence

 

Please add any comment to that analysis you feel would be useful, and please do comment on my comments above!

Recently I discovered the "relink" binary that sits in the bin folder of the grid infrastructure home (RAC) and the Oracle home (database).  I learned that the Oracle binaries in a GI for Linux image are not shipped fully linked.  When you install that image, the Oracle Universal Installer links the Oracle object files with the appropriate OS libraries to produce the executables.  So when those OS libraries are updated or modified, it can lead to issues with the Oracle binaries.  Hence the Oracle relink tool, which relinks the Oracle binaries against the (possibly new) OS libraries.

See this page in the Oracle documentation for reference:

https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/relinking-oracle-grid-infrastructure-for-a-cluster-b…

 

This article on Oracle-help explains how to use relink (for different recent versions of Oracle):

http://oracle-help.com/oracle-rac/relink-oracle-12c-grid-infrastructure-and-rdbms-binaries/

Here is what I did to use relink on a 12.1.0.2 GRID_HOME:

 

root@rac01: /tmp # $ORACLE_HOME/crs/install/rootcrs.sh -unlock

Using configuration parameter file: /app/grid/12.1.0/crs/install/crsconfig_params

2019/04/19 10:54:41 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2019/04/19 10:54:49 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2019/04/19 10:54:50 CLSRSC-347: Successfully unlock /app/grid/12.1.0

rac01@gridis::~ #  export PATH=$GRID_HOME/bin:$PATH

rac01@gridis::~ # $GRID_HOME/bin/relink all

writing relink log to: /app/grid/12.1.0/install/relink.log

Then repeat those steps on the second node:

root@rac02: /tmp # cd $ORACLE_HOME/rdbms/install

root@rac02: /app/grid/12.1.0/rdbms/install # ./rootadd_rdbms.sh

root@rac02: /app/grid/12.1.0/rdbms/install # cd $ORACLE_HOME/crs/install

root@rac02: /app/grid/12.1.0/crs/install # ./rootcrs.sh -patch => do this on both nodes.  It will start the cluster.

 

Note the 4 steps:

  1. First, all the cluster resources must be stopped on the node where you're going to run relink (see the sketch below)
  2. Then, as root, unlock the Oracle binaries with rootcrs.sh -unlock
  3. Then, as the GI owner, run relink all
  4. Then, as root, lock the binaries again with rootadd_rdbms.sh followed by rootcrs.sh -patch (that's for 12cR1; it's different in 12cR2, see the Oracle-help article above).  This will start your cluster on the current node.

Then repeat those steps on all the other nodes of your RAC.
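
For step 1, assuming you stop the whole Clusterware stack on that node, something like this as root would do (a sketch, not the exact command from my session):

root@rac01: /tmp # $GRID_HOME/bin/crsctl stop crs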

Hello fellow DBAs,

I consider my tweets to be just as useful as a full blog post (if not more so). So I'm going to publish compilations of useful tweets, because when you read a bunch of them back to back, you realize there's valuable content in there.  So here are the tweets I consider useful from January 2018 (I was doing a lot of Data Guard back then).  Enjoy!

 

 

 

Lately I have applied PSUs to 12cR1 QA and PROD Grid Infrastructure environments.  I ran into a number of problems and kept a journal of them.  I then created a Korn shell script that checks for the causes of those various problems.  I have also developed the following procedure for applying a PSU (here to a 2-node 12cR1 RAC environment).

 

Let's apply the July 2018 PSU to the node1/node2 12cR1 RAC cluster.

 

    Save GRID_HOME on node1:

First, clean up GRID_HOME (so as to save space and have a smaller backup):

root@node1: /u01/GI/12.1.0/.patch_storage # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   57G   28G  68% /app

root@node1: /u01/GI/12.1.0/.patch_storage # rm -rf 25363740_Mar_7_2017_23_34_43 25078431_Jan_25_2017_05_30_05 25363750_Feb_2_2017_23_57_22

root@node1: /u01/GI/12.1.0/.patch_storage # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   53G   32G  64% /app

root@node1: /u01/GI/12.1.0/.patch_storage # nice --8 tar cfz - /u01/GI/12.1.0  > /u01/GI/GRIDHOMEavril2018.tgz    => started at 10H01

root@node1: /home/giowner # ls -rtlh /u01/GI/*HOMEavril2018*gz

-rw-r--r-- 1 root root 6.2G Jan 29 10:22 /u01/GI/GRIDHOMEavril2018.tgz

     

        Carry out pre-patch checks (with my in-house shell script):

root@node1: /tmp # ksh /home/giowner/vérifspréoupostPSU.ksh

Description de la log (en 1 seul mot) ?

avantPSUJUILL2018

 

Patch level status of Cluster nodes :  818769343                       node2,node1

OPatch Version: 12.2.0.1.8

 

        Save ORACLE_HOME on node1:

root@node1: /u01/GI/12.1.0/.patch_storage # nice --7 tar cfz - /u01/OH/oracle/product/12.1.0 > /u01/GI/ORACLEHOMEavril2018.tgz &

[1] 33164

 

           Save GRID_HOME on node2:

First, do some housekeeping to save space.         

root@node2: /u01/GI # df -h .

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/vg10-lv1001   89G   55G   30G  65% /app

root@node2: /u01/GI # du -h --max-depth=1 /u01/GI/12.1.0/|sort -h        => great command to sort per size

8.0K    /u01/GI/12.1.0/eons

8.0K    /u01/GI/12.1.0/gnsd

8.0K    /u01/GI/12.1.0/QOpatch

8.0K    /u01/GI/12.1.0/utl

12K     /u01/GI/12.1.0/ctss

12K     /u01/GI/12.1.0/dc_ocm

12K     /u01/GI/12.1.0/gipc

12K     /u01/GI/12.1.0/hs

12K     /u01/GI/12.1.0/mdns

12K     /u01/GI/12.1.0/ohasd

12K     /u01/GI/12.1.0/ologgerd

12K     /u01/GI/12.1.0/osysmond

12K     /u01/GI/12.1.0/scheduler

16K     /u01/GI/12.1.0/diagnostics

16K     /u01/GI/12.1.0/slax

20K     /u01/GI/12.1.0/olap

24K     /u01/GI/12.1.0/addnode

32K     /u01/GI/12.1.0/.opatchauto_storage

36K     /u01/GI/12.1.0/auth

40K     /u01/GI/12.1.0/wlm

68K     /u01/GI/12.1.0/relnotes

116K    /u01/GI/12.1.0/clone

228K    /u01/GI/12.1.0/css

256K    /u01/GI/12.1.0/gpnp

312K    /u01/GI/12.1.0/racg

384K    /u01/GI/12.1.0/precomp

428K    /u01/GI/12.1.0/sqlplus

528K    /u01/GI/12.1.0/dbs

536K    /u01/GI/12.1.0/xag

540K    /u01/GI/12.1.0/xdk

556K    /u01/GI/12.1.0/wwg

732K    /u01/GI/12.1.0/ucp

1.5M    /u01/GI/12.1.0/instantclient

1.6M    /u01/GI/12.1.0/plsql

1.7M    /u01/GI/12.1.0/owm

2.0M    /u01/GI/12.1.0/deinstall

2.6M    /u01/GI/12.1.0/opmn

2.7M    /u01/GI/12.1.0/has

6.4M    /u01/GI/12.1.0/evm

6.4M    /u01/GI/12.1.0/network

11M     /u01/GI/12.1.0/install

12M     /u01/GI/12.1.0/ldap

14M     /u01/GI/12.1.0/demo

15M     /u01/GI/12.1.0/cdata

20M     /u01/GI/12.1.0/sqlpatch

22M     /u01/GI/12.1.0/srvm

29M     /u01/GI/12.1.0/log

32M     /u01/GI/12.1.0/ord

32M     /u01/GI/12.1.0/oui

38M     /u01/GI/12.1.0/jdbc

48M     /u01/GI/12.1.0/dmu

48M     /u01/GI/12.1.0/nls

53M     /u01/GI/12.1.0/oracore

74M     /u01/GI/12.1.0/jlib

74M     /u01/GI/12.1.0/perl

76M     /u01/GI/12.1.0/suptools

77M     /u01/GI/12.1.0/cv

78M     /u01/GI/12.1.0/rest

99M     /u01/GI/12.1.0/crs

104M    /u01/GI/12.1.0/md

132M    /u01/GI/12.1.0/cfgtoollogs

147M    /u01/GI/12.1.0/oc4j

160M    /u01/GI/12.1.0/jdk

201M    /u01/GI/12.1.0/OPatch

208M    /u01/GI/12.1.0/crf

262M    /u01/GI/12.1.0/rdbms

263M    /u01/GI/12.1.0/assistants

336M    /u01/GI/12.1.0/javavm

910M    /u01/GI/12.1.0/tfa

985M    /u01/GI/12.1.0/lib

1.5G    /u01/GI/12.1.0/inventory

2.1G    /u01/GI/12.1.0/bin

3.1G    /u01/GI/12.1.0/usm

12G     /u01/GI/12.1.0/.patch_storage

23G     /u01/GI/12.1.0/

root@node2: /u01/GI # cd /u01/GI/12.1.0/.patch_storage

root@node2: /u01/GI/12.1.0/.patch_storage # ls -lrth

total 128K

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21436941_Aug_13_2015_04_00_40

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 23854735_Sep_29_2016_23_50_00

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 24006101_Oct_1_2016_12_33_50

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 23054246_Jul_5_2016_07_07_59

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 22291127_Apr_6_2016_03_46_21

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21948354_Dec_20_2015_23_39_33

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 21359755_Oct_21_2015_02_52_58

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 20831110_Jul_11_2015_00_45_40

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 20299023_Mar_16_2015_22_21_54

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 19769480_Dec_15_2014_06_54_52

drwxr-xr-x  4 giowner oracle 4.0K Nov  2  2016 24007012_Aug_30_2016_00_17_17

drwxr-xr-x  4 giowner oracle 4.0K Sep 10  2017 25363740_Mar_7_2017_23_34_43

drwxr-xr-x  4 giowner oracle 4.0K Sep 10  2017 25363750_Feb_2_2017_23_57_22

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:12 26983807_Nov_8_2017_07_59_12

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:14 27338013_Feb_14_2018_10_26_39

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:15 27338020_Mar_27_2018_11_48_03

drwxr-xr-x 28 giowner oracle 4.0K Oct 15 14:15 NApply

-rw-r--r--  1 giowner oracle  12K Oct 15 14:15 record_inventory.txt

-rw-r--r--  1 giowner oracle  15K Oct 15 14:15 interim_inventory.txt

-rw-r--r--  1 giowner oracle   93 Oct 15 14:16 LatestOPatchSession.properties

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26925311_Dec_6_2017_01_18_05

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 24732082_Dec_21_2016_07_15_01

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26609783_Aug_10_2017_05_36_42

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 25755742_Jun_29_2017_09_56_57

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 25171037_Mar_7_2017_12_37_23

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 26713565_Sep_29_2017_06_57_50

drwxr-xr-x  4 giowner oracle 4.0K Oct 15 14:16 27338041_Mar_16_2018_02_05_00

root@node2: /u01/GI/12.1.0/.patch_storage # rm -rf 21436941_Aug_13_2015_04_00_40 23854735_Sep_29_2016_23_50_00 24006101_Oct_1_2016_12_33_50 23054246_Jul_5_2016_07_07_59 22291127_Apr_6_2016_03_46_21 21948354_Dec_20_2015_23_39_33 21359755_Oct_21_2015_02_52_58 20831110_Jul_11_2015_00_45_40 20299023_Mar_16_2015_22_21_54 19769480_Dec_15_2014_06_54_52 24007012_Aug_30_2016_00_17_17 25363740_Mar_7_2017_23_34_43 25363750_Feb_2_2017_23_57_22

nice --8 tar cfz - /u01/GI/12.1.0 > /u01/GI/GRIDHOMEavril2018.tgz    => started at 10H21

root@node2: /u01/GI/12.1.0/.patch_storage # ls -lh /u01/GI/GRIDHOMEavril2018.tgz

-rw-r--r-- 1 root root 5.3G Jan 29 10:38 /u01/GI/GRIDHOMEavril2018.tgz

         

            Upgrade OPatch, on all nodes:

        Upgrade Opatch, for all GRID_HOMEs, as root:

scp giowner@otherserver:/u01/OH/oracle/product/12.1.0/DernierOPatch--p6880880_122010_Linux-x86-64.zip /u01/GI/12.1.0/

cd /u01/GI/12.1.0/ ; chown giowner:oracle DernierOPatch--p6880880_122010_Linux-x86-64.zip ; cd $GRID_HOME; ./OPatch/opatch version ; du -sh ./OPatch

cp -R  ./OPatch/ ./OPatch12.2.0.1.8 && du -sh ./OPatch12.2.0.1.8    => save the old OPatch, just in case

rm -Rf ./OPatch/ && unzip DernierOPatch--p6880880_122010_Linux-x86-64.zip &&  ./OPatch/opatch version ; chown -R giowner:oracle ./OPatch  ;  ls -dlah OPatch ; mv DernierOPatch--p6880880_122010_Linux-x86-64.zip /u01/OH/oracle/product/12.1.0/

        Upgrade Opatch, for all ORACLE_HOMEs, as oracle:

cd $ORACLE_HOME;./OPatch/opatch version ; du -sh ./OPatch

cp -R  ./OPatch/ ./OPatch12.2.0.1.8  && du -sh ./OPatch12.2.0.1.8

rm -Rf ./OPatch/ && unzip DernierOPatch--p6880880_122010_Linux-x86-64.zip &&  ./OPatch/opatch version ; ls -dlah OPatch

     

           Save ORACLE_HOME on node2

nice --8 tar cfz - /u01/OH/oracle/product/12.1.0 > /u01/GI/ORACLEHOMEavril2018.tgz    => started at 10H49

     

           Bring the PSU onto node1

        as root:

root@node1: /u01/GI/12.1.0 # mkdir -p /app/distrib/patch/ ; chmod g+w /app/distrib/patch/ ; chown oracle:oracle /app/distrib/patch/ ;mount -o vers=3,nolock 10.20.30.40:/patches/LinuxGI /mnt

        as giowner

cd /mnt/oracle/Linux64/12.1.0.2/ ; unzip PSUJUL2018pourRAC--p27967747_121020_Linux-x86-64.zip -d /app/distrib/patch/

     

            Preparations:

Make sure /tmp has at least 1GB of free space.

    Log some data about the cluster before applying the patch:

as giowner : /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvBEFORE; $GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvBEFORE

     

    Check patch compatibility, as root on node1:

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -analyze -logLevel FINER        => started at 10H56    OPatchAuto successful

     

    Stop any ACFS filesystems, as root:

node1@giowner:+ASM2:~ # /sbin/acfsutil registry|grep "Mount Point"|awk -F ":" '{print "/bin/umount"$2}'

/bin/umount /app/oacfsmp

/bin/umount /data/oacfsmp

root@node1: /u01/GI/12.1.0 #  mount |grep acfs  ;   

 

    Move any cluster resource that runs on this node only (as the whole cluster will be brought down on this node):

root@node1: /u01/GI/12.1.0 # crsctl eval relocate resource dollarU -s node1 -n node2 -f        => the EVAL command just says what will happen if you run this relocate command

root@node1: /u01/GI/12.1.0 # crsctl relocate resource dollarU -s node1 -n node2 -f        => the actual relocate command, which moves the dollarU resource from node1 to node2

 

       Apply the patch on node1, as root

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -logLevel FINE    => started at 11H04    Session log file is /u01/GI/12.1.0/cfgtoollogs/opatchauto/opatchauto2019-01-29_11-04-40AM.log    The id for this session is 6S82

Bringing down CRS service on home /u01/GI/12.1.0

...

It should end with "OPatchauto session completed at Tue Jan 29 11:34:33 2019 Time taken to complete the session 12 minutes, 5 seconds"

 

        Post-patch checks on node1, as giowner

node1@giowner:+ASM1:~ # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "ACFS PATCH SET UPDATE 12.1.0.2.180717 (27762277)"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

Patch description:  "WLM Patch Set Update: 12.1.0.2.180116 (26983807)"

     

        Post-patch checks on node1, as oracle

$ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

node1@oracle:MYRACDB1:/u01/OH/oracle/product/12.1.0/bin # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

     

Log some data about the cluster after applying the patch, as  giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvAFTER;$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvAFTER;crsctl query crs activeversion -f  >> ~/$(date +%Y%m%d_)lsinvAFTER     

=> Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [818769343].

 

If everything is OK so far, and there has been no downtime, let's continue with the next node:

     

    Move any cluster resource that runs on that second node only (as the whole cluster will be brought down on this node): 

root@node2: /u01/GI/12.1.0 # crsctl eval relocate resource dollarU -s node2 -n node1 -f

root@node2: /u01/GI/12.1.0 # crsctl relocate resource dollarU -s node2 -n node1 -f

 

    Carry out pre-patch checks (with my in-house shell script):

root@node2: /home/giowner # ksh vérifspréoupostPSU.ksh

Description de la log (en 1 seul mot) ?

avantPSUJUILL2018

 

Log any reported problem and fix it.

     

            Bring the PSU onto node2

        as root:

root@node1: /u01/GI/12.1.0 # mkdir -p /app/distrib/patch/ ; chmod g+w /app/distrib/patch/ ; chown oracle:oracle /app/distrib/patch/ ;mount -o vers=3,nolock 10.20.30.40:/patches/LinuxGI /mnt

       as giowner:

cd /mnt/oracle/Linux64/12.1.0.2/ ; unzip PSUJUL2018pourRAC--p27967747_121020_Linux-x86-64.zip -d /app/distrib/patch/

     

            Preparations on node 2:

Make sure /tmp has at least 1GB of free space.

    Log some data about the cluster before applying the patch:

as giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvBEFORE; $GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvBEFORE

     

    Check patch compatibility (dry run), as root on node2:

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -analyze -logLevel FINER        => started at 10H56    OPatchAuto successful

     

    Stop any ACFS filesystems, as root:

node1@giowner:+ASM2:~ # /sbin/acfsutil registry|grep "Mount Point"|awk -F ":" '{print "/bin/umount"$2}'

/bin/umount /app/oacfsmp

/bin/umount /data/oacfsmp

root@node1: /u01/GI/12.1.0 #  mount |grep acfs  ;   

 

 

                Apply the patch to the 2nd node, as root

export PATH=$PATH:/u01/GI/12.1.0/OPatch

/u01/GI/12.1.0/OPatch/opatchauto apply /app/distrib/patch/27967747 -logLevel FINE        => started at 15H37        Session log file is /u01/GI/12.1.0/cfgtoollogs/opatchauto/opatchauto2019-01-29_03-38-31PM.log    The id for this session is YRQF    OPatchauto session completed at Tue Jan 29 15:51:44 2019    Time taken to complete the session 13 minutes, 38 seconds

     

        Post-patch checks on node2, as giowner

node2@giowner:+ASM2:~ # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "ACFS PATCH SET UPDATE 12.1.0.2.180717 (27762277)"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

Patch description:  "WLM Patch Set Update: 12.1.0.2.180116 (26983807)"

 

       Post-patch checks on node2, as oracle

node2@oracle:MYRACDB2:/u01/OH/oracle/product/12.1.0/bin # $ORACLE_HOME/OPatch/opatch lsinventory|grep "Patch description:"

Patch description:  "OCW PATCH SET UPDATE 12.1.0.2.180717 (27762253)"

Patch description:  "Database Patch Set Update : 12.1.0.2.180717 (27547329)"

29-01 16:08 MYRACDB2 SYS AS SYSDBA> select PATCH_ID, DESCRIPTION, ACTION, STATUS from DBA_REGISTRY_SQLPATCH;

  PATCH_ID DESCRIPTION                                               ACTION          STATUS

---------- --------------------------------------------------------- --------------- ---------------

  25171037 DATABASE PATCH SET UPDATE 12.1.0.2.170418                 APPLY           SUCCESS

  27338041 DATABASE PATCH SET UPDATE 12.1.0.2.180417                 APPLY           SUCCESS

  27547329 DATABASE PATCH SET UPDATE 12.1.0.2.180717                 APPLY           SUCCESS 

 

Log some data about the cluster after applying the patch, as  giowner: /u01/GI/12.1.0/bin/kfod op=patchlvl >> ~/$(date +%Y%m%d_)lsinvAFTER;$GRID_HOME/OPatch/opatch lsinventory -detail -oh $GRID_HOME >> ~/$(date +%Y%m%d_)lsinvAFTER;crsctl query crs activeversion -f  >> ~/$(date +%Y%m%d_)lsinvAFTER
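
To spot what changed between the two snapshots, a simple diff will do (assuming both files were created on the same day, so that the date prefix matches):

diff ~/$(date +%Y%m%d_)lsinvBEFORE ~/$(date +%Y%m%d_)lsinvAFTER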

 

    When all is over, unmount the patch mount point on both nodes:

root@node2: /home/giowner # umount /mnt

root@node1: /tmp # umount /mnt

 

     So I patched both nodes of my cluster with no downtime.  I have a backup of both my ORACLE_HOMEs and GRID_HOMEs, a backup of the previous version of OPatch, and I traced the situation before and after the patch (in the $(date +%Y%m%d_)lsinvBEFORE and $(date +%Y%m%d_)lsinvAFTER text files).  Please tell me what you think in the comments below.

Dear DBA Frank

New Quality Blog

Posted by Dear DBA Frank, 2018-11-30

A colleague of mine recently started his own blog.  He is an advanced DBA with good all-round knowledge of the Oracle database.  I encouraged him to share some really useful stuff he had shown me, so here goes:

MY DBA WORLD

I am sure many will enjoy his posts (about half a dozen so far), as they are clear, thorough and insightful.  Don't hesitate to leave comments on his blog posts!

Oftentimes, I have to run an EXPDP job on a server that has no in-house EXPDP scripts.  So instead of figuring out how to write a long EXPDP command line such as this one:

expdp system@mydb directory=DATAPUMP dumpfile=161122MYDBPROD-%U.dmp logfile=161122MYDBPRODexport.log JOB_NAME=MYDBPROD schemas=myprodschema parallel=2 EXCLUDE=STATISTICS METRICS=Y

I just copy/paste a ksh script of mine onto the database server, then run it with its 5 parameters:

expdp5.ksh 12 MYBIGSCHEMA "TABLE:\"LIKE '%$%'\"" MYDB_MYBIGSCHEMA /data/share/exports/exadata/DATAPUMP
expdp5.ksh 32 SHEBANG SHEBANG MYDB_THEWHOLESHEBANG /data/share/exports/exadata/DATAPUMP
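
For instance, a FULL export with a degree of parallelism of 8 that skips statistics would look like this (the dump name and path are just examples):

expdp5.ksh 8 SHEBANG STATISTICS MYDB_FULLNOSTATS /data/share/exports/exadata/DATAPUMP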

 

Here are the contents of the Korn shell script:

#!/bin/ksh

##set -x

# July 10 2018, for a one-off expdp job. Frank Dernoncourt

# be connected as the oracle Unix user

# October 30, 2018 :  added flashback_time=to_timestamp\(localtimestamp\)

 

#### 1st  parameter : degree of  parallelism

#### 2nd parameter : schema or SHEBANG for a FULL export          #### to export several schemas, enter a comma-separated list

#### 3rd parameter : EXCLUSIONS or SHEBANG          #### SHEBANG if you don't want to exclude anything.  For exclusions, just enter what you would enter after EXCLUDE=.  For instance, "STATISTICS" for "EXCLUDE=STATISTICS"

#### 4th parameter : dumpfile without .dmp

#### 5th parameter : path for the dump and the log

 

LOGFILE=expdp--${2}.log

THEDATE=$(date +%Y%m%d_%H%M)

SECONDS=0

echo ""        | tee -a ${LOGFILE}

echo "================== This is the log of $(basename ${0}), which ran at ${THEDATE} =================="        | tee -a ${LOGFILE}

 

if [ -z "${ORACLE_SID}" ]

then

echo "ORACLE_SID not defined"

exit 1

fi

 

if [ $# -ne 5 ]

then

echo "Wrong number of parameters."

echo "1 : DOP"

echo "2 : schema to export or SHEBANG for FULL"

echo "3 : exclusions or SHEBANG  Exemple: "TABLE:\"LIKE '%BIN$%'\"" "          #### In this example, tables from the RECYCLING BIN will be excluded

echo "4 : dumpfile name without .dmp"

echo "5 : path for the log and dump"

exit 1

fi

 

if [[ ${1} -gt 1 ]]

then

echo "You chose a degree of parallelism of ${1}, which will create ${1} dumpfiles."

DOP=${1}

else

DOP="1"

fi

 

export AE=$(echo ${2} | tr '[:lower:]' '[:upper:]')

if [[ ${AE} = "SHEBANG" ]]

then

echo "You chose to run a FULL export."

AEXPORTER="FULL=Y"

else

echo "You chose to export schema ${AE}."

AEXPORTER="SCHEMAS=${AE}"

fi

 

export EXCLU=$(echo ${3} | tr '[:lower:]' '[:upper:]')

if [[ ${EXCLU} = "SHEBANG" ]]

then

echo "You chose not to exclude anything from the export."

EXCLUSIONS=""

else

echo "You chose those exclusions: ${EXCLU}."

EXCLUSIONS="EXCLUDE=${EXCLU}"

fi

 

export FDUMP=${4}

if [[ ${DOP} -gt 1 ]]

then

DUMPFILE="${FDUMP}%U"

else

DUMPFILE="${FDUMP}"

fi

 

export CHEMIN=${5}

if [ -d "${CHEMIN}" ]

then

DIRECTORY="${CHEMIN}"

else

echo "The chosen path does not exist."

exit 1

fi

 

sqlplus -s /nolog <<EOF

connect / as sysdba

whenever sqlerror exit

-- For this one-off job, I create a DIRECTORY that I drop when it's over
create or replace directory DATA_PUMP_UN as '${DIRECTORY}';

EOF

 

echo "The command line to be run is: expdp system flashback_time=to_timestamp(localtimestamp) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1} "                | tee -a ${LOGFILE}

 

echo "OK to continue Y/N ? "

read CONFIRMATION

if [ "${CONFIRMATION}" = "N" ]

then exit

fi

 

expdp system flashback_time=to_timestamp\(localtimestamp\) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1}  2>>${LOGFILE}          #### Only errors are routed to the main log.  The full EXPDP log will still appear in LOGFILE in the DATA_PUMP_UN directory

 

RET=$?

echo "return code of the export: $RET" | tee -a ${LOGFILE}

 

if [ $RET -ne 0 ]

then

echo "==> UNEXPECTED RESULT" | tee -a ${LOGFILE}

else

echo "==> EXPECTED RESULT" | tee -a ${LOGFILE}

fi

 

##set +x

sqlplus -s /nolog <<EOF

connect / as sysdba

whenever sqlerror exit

-- Once the job is done, I drop the DIRECTORY.
drop directory DATA_PUMP_UN;

EOF

 

echo "   duration ${SECONDS} seconds" | tee -a ${LOGFILE}

date "+%n%d/%m/%Y %H:%M:%S%n" | tee -a ${LOGFILE}

echo "View the log that script in ${LOGFILE}"

 

exit $RET

 

The first version of my shell script is in French, and here is what it looks like when you run it:

181031expdp5FRENCH.jpg

If anything is unclear, let me know in the comments below.  Enjoy my script!

For an SR, I'm being asked by Oracle Support to provide cluster data from 4 hours before the problem to 4 hours after it.

For reference, check SRDC - Data Collection for Grid Infrastructure, Storage Management, and RAC (Doc ID 2041228.1)
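
Before launching a collection, it does not hurt to confirm that TFA is up and running on every node; with the same TFA home as below, that would be something like:

/app/grid/12.1.0/tfa/bin/tfactl print status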

Here are the command lines I used and their output.

[myracuser@racnode1 bin]$ pwd

/app/grid/12.1.0/tfa/bin

[myracuser@racnode1 bin]$

[myracuser@racnode1 bin]$ /app/grid/12.1.0/tfa/bin/tfactl diagcollect -from "Jun/1/2018 13:00:00" -to "Jun/1/2018 21:00:00"

Collecting data for all nodes

Scanning files from Jun/1/2018 13:00:00 to Jun/1/2018 21:00:00

 

Collection Id : 20180604104236racnode1

 

Repository Location in racnode1 : /app/myracuser/tfa/repository

 

Collection monitor will wait up to 30 seconds for collections to start

2018/06/04 10:42:40 CEST : Collection Name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

2018/06/04 10:42:40 CEST : Scanning of files for Collection in progress...

2018/06/04 10:42:40 CEST : Collecting extra files...

2018/06/04 10:43:10 CEST : Getting list of files satisfying time range [06/01/2018 13:00:00 CEST, 06/01/2018 21:00:00 CEST]

2018/06/04 10:43:10 CEST : Starting Thread to identify stored files to collect

2018/06/04 10:43:10 CEST : Getting List of Files to Collect

2018/06/04 10:43:10 CEST : Trimming file : racnode1/app/myracuser/crsdata/racnode1/cvu/cvutrc/cvuhelper.log.0 with original file size : 656kB

2018/06/04 10:43:11 CEST : racnode1: Zipped 100 Files so Far

2018/06/04 10:43:16 CEST : racnode1: Zipped 200 Files so Far

2018/06/04 10:43:19 CEST : racnode1: Zipped 300 Files so Far

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_oraagent_myracuser.trc with original file size : 7MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_orarootagent_root.trc with original file size : 882kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_scriptagent_myracuser.trc with original file size : 4.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/evmd.trc with original file size : 2.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/gipcd.trc with original file size : 7.6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ocssd_1.trc with original file size : 52MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/octssd.trc with original file size : 6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd.trc with original file size : 6.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdagent_root.trc with original file size : 854kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdmonitor_root.trc with original file size : 847kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_oraagent_myracuser.trc with original file size : 6.5MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_orarootagent_root_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/var/log/messages-20180603 with original file size : 4.2MB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener/trace/listener.log with original file size : 666kB

2018/06/04 10:43:21 CEST : racnode1: Zipped 400 Files so Far

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan1/trace/listener_scan1.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan2/trace/listener_scan2.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan3/trace/listener_scan3.log with original file size : 669kB

2018/06/04 10:43:31 CEST : Finished Getting List of Files to Collect

2018/06/04 10:43:40 CEST : Collecting ADR incident files...

2018/06/04 10:43:40 CEST : Waiting for collection of extra files

2018/06/04 10:45:10 CEST : Completed collection of extra files...

2018/06/04 10:45:10 CEST : Completed Zipping of all files

2018/06/04 10:45:10 CEST : Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finished Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Finished Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Total Number of Files checked : 3034

2018/06/04 10:45:10 CEST : Total Size of all Files Checked : 492MB

2018/06/04 10:45:10 CEST : Number of files containing required range : 489

2018/06/04 10:45:10 CEST : Total Size of Files containing required range : 135MB

2018/06/04 10:45:10 CEST : Number of files trimmed : 19

2018/06/04 10:45:10 CEST : Total Size of data prior to zip : 27MB

2018/06/04 10:45:10 CEST : Saved 109MB by trimming files

2018/06/04 10:45:10 CEST : Zip file size : 2.3MB

2018/06/04 10:45:10 CEST : Total time taken : 150s

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

Logs are being collected to: /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all

/app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

 

180604outputofTFACollector.jpg

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ ls -lrth

total 2.4M

-rwx------ 1 myracuser oracle 1.6K Jun  4 10:43 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip.txt

-rwx------ 1 myracuser oracle 2.4M Jun  4 10:45 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

-rwx------ 1 myracuser oracle 4.3K Jun  4 10:45 diagcollect_20180604104236_racnode1.log

 

 

The .txt file is 40 lines long and is the report of the collection:

TFA Version : 12.1.2.7.0

Build ID : 12127020160304140533

 

Collection ID : 20180604104236racnode1

Zip file name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file location : /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file creation date : Mon Jun 04 2018 10:42:40 CEST

Host name : racnode1

Duration of Diagnostics :

  Start date : Fri Jun 01 2018 13:00:00 CEST

  End date : Fri Jun 01 2018 21:00:00 CEST

Component(s) in zip file : INSTALL  CRS  OS  EMAGENT  DBWLM  OCM  ACFS  RDBMS  TNS  OMS  WLS  EMPLUGINS  EM  ASMPROXY  ASM  CFGTOOLS  CHMOS  SUNDIAG

User initiated

 

Directory Structure in zip file :

racnode1/

racnode1/etc/oracle

racnode1/app/oraInventory/ContentsXML

racnode1/app/myracuser/crsdata/racnode1/output

racnode1/var/log

racnode1/tnslsnr/racnode1/listener_scan2/trace

 

 

The log file contains what appeared on standard output above, and it ends with

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

 

Here is a glance at the contents of the zip file:

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ unzip -l racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Archive:  racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

  Length      Date    Time    Name

---------  ---------- -----   ----

    13658  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/lsinv/lsinventory2018-06-01_17-24-19PM.txt

    22345  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/opatch2018-06-01_17-24-19PM_1.log

   383494  06-01-2018 17:13   racnode1/app/grid/12.1.0/install/make.log

     1063  06-01-2018 15:58   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_15-58-40.log

     7402  06-01-2018 17:23   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_17-13-23.log

      329  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/comps.xml

      564  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/inventory.xml

      292  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/libs.xml

     8464  06-01-2018 17:08   racnode1/app/oraInventory/logs/UpdateNodeList2018-06-01_05-07-55-PM.log

     369  06-04-2018 10:42   racnode1/racnode1_QUERYVOTE

      127  06-04-2018 10:42   racnode1/racnode1_VERSIONS

---------                     -------

27114817                     550 files

 

I then uploaded the zip (only the zip file, as requested) to the SR.  The SR status changed from “Customer Working” to “Review Update”.

The 18c "on-premises" version is not available yet (it is available in the cloud), but the Oracle 18c client has been released.

Let's install this client on our Windows 7 Enterprise workstation. After unzipping the archive, you have to launch setup.exe:

ScreenShot002.jpg

If you choose "Custom", after specifying the Windows user who will own the new ORACLE_HOME, you will see this list of components to pick from:

But we are going to select "Administrator" and click "Next":

We choose "Use existing Windows user" and enter the credentials of our Windows user:

Let's specify the ORACLE_BASE:

Then it runs on its own:

Until we reach this summary:

180619client18--1.jpg

 

We can save the response file, which can later be used for a silent installation, then we click "Install" and it runs on its own:

Then it configures everything, on its own:

And it should end like this:

 

If you click the "Help" button, it makes you smile:

 

We now have a new Oracle client in the specified path:

And in the Windows Start menu, we have these new entries:

ScreenShot001.jpg

 

And we can now try out the new SQL*Plus:
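
A side note on the response file saved above: it can later drive a silent installation of the same client from a command prompt, roughly like this (the response file path is just an example):

setup.exe -silent -responseFile C:\distrib\client18c\client_install.rsp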

If you use Data Guard, you probably use its broker.  You may not have noticed, but whenever you make changes to your Broker configuration, those changes are stored in a file on the operating system.  The location of that file is specified in the DG_BROKER_CONFIG_FILE1 initialisation parameter.  DG_BROKER_CONFIG_FILE2 is a backup of DG_BROKER_CONFIG_FILE1, as we are going to see in this post.  Note that, as the Oracle documentation says, these initialisation parameters can be modified only when the broker is not running.
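
As a reminder, relocating those files on a given database goes something like this (the broker must be stopped first; the paths are simply the ones you will see later in this post):

SYS AS SYSDBA> show parameter dg_broker

SYS AS SYSDBA> alter system set dg_broker_start=FALSE;

SYS AS SYSDBA> alter system set dg_broker_config_file1='/app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat' scope=both;

SYS AS SYSDBA> alter system set dg_broker_config_file2='/app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat' scope=both;

SYS AS SYSDBA> alter system set dg_broker_start=TRUE;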

Let's look at what happens to those files when you create/delete/re-create/modify a broker configuration in Oracle 12cR1.

 

Here is the situation at 11H30 on the 29th of May (Database MYDB4 is on server machine2, and database MYDB3 is on server machine1):

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 10:03 /app/racnode1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:22 /app/racnode1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's do away with the current broker configuration:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

ORA-16620 occurs when the broker is not started on one of the databases.  In that case, the broker configuration files are not affected.  Let's start the broker on our STANDBY and try again.
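
Starting the broker on the STANDBY is not shown in the transcript; it simply means running something like this on the STANDBY instance:

SYS AS SYSDBA> alter system set dg_broker_start=TRUE;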

DGMGRL> remove configuration

Removed configuration

 

Now let's create a new broker configuration:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Configuration "MYDB3dgconf" created with primary database "MYDB3"

 

Since I am working on machine1, the broker configuration files on machine1 have been affected at 11:35AM:

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle  12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

 

Now let's enable that configuration:

DGMGRL>  enable configuration

Enabled.

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

As you can see, only one of the configuration files was modified (at 11:36AM).  This is because the other one is a backup of the previous state of the configuration (when it was not enabled).

 

At 11:38AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Error: ORA-16525: The Oracle Data Guard broker is not yet available.

Failed.

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

No files were modified.  When you generate an error in DGMGRL, the broker configuration files are not affected.

Here is the explanation in the Data Guard log on machine1:

ADD DATABASE

      05/29/2018 11:38:54

      DG_BROKER_START is not set to TRUE on member MYDB4. Set the parameter to TRUE and then try to add the member to the configuration.

 

So let's correct that:

29-05 11:43 MYDB4 SYS AS SYSDBA> alter system set dg_broker_start=TRUE;

System altered.

 

Now at 11:45AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Database "MYDB4" added

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:45 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

On machine1, one configuration file was altered, and the other is a backup of the previous state.

But on machine2, both configuration files were modified (in fact, re-created), since this is the first time this new broker configuration generated a change on machine2 (remember, that broker configuration was created at 11:35AM on machine1 and so far had not done anything involving machine2).

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

 

But the STANDBY broker log reports an error:

05/29/2018 11:44:50

      Site MYDB3 returned ORA-16603.

      MYDB4 version check failed; status = ORA-16603: detected configuration unique ID mismatch

            configuration will be deleted on this database

 

Do you remember that the broker of my STANDBY was not started when I ran CREATE CONFIGURATION on the PRIMARY?  So let's re-create the broker configuration while both brokers are started.

 

At 11:55AM:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

Removed configuration

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

As you can see, when you remove a broker configuration, there is no backup of the removed configuration: both files are reset (their size is 8K, whereas it was 12K when a configuration existed).

At that point, I had to move to another subject, and my colleague finished that broker configuration.  But on the following day (the 30th of May), I had to re-create it.

Here is the situation on the morning of May 30:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

At 10:40AM, I removed the broker configuration from machine1:

DGMGRL>  remove configuration;

Removed configuration

 

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

It seems the remove operation first goes to the other server (the files there were reset at 10:40, while the files on machine1 were reset one minute later).

 

Now let's create a new broker configuration, with MYDB3 as the PRIMARY:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16584: operation cannot be performed on a standby database

Failed.

 

So let's go to the other server (machine2):

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys

Password:

Connected as SYSDG.

 

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16642: DB_UNIQUE_NAME mismatch

Failed.

This error occurred at 10:44AM, and it did modify the configuration files:

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

But not on the STANDBY side (machine1):

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's create our configuration but with the right PRIMARY this time:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

As seen previously, when you create a configuration, both files are affected (but notice their difference in size):

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

But on the STANDBY side, nothing happened, as the STANDBY has not been added to the configuration yet:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's enable that configuration:

DGMGRL> enable configuration

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Again, one of the files contains the previous state of the configuration (two minutes earlier, before the enablement).

 

Now, let's add a STANDBY to that configuration:

DGMGRL> add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

That error did not affect the configuration files.

 

Before we modify the log_archive_dest_3 initialisation parameter (which is the cause of the ORA-16698), we must remove our configuration.  We will re-create it after making that change in our databases.

DGMGRL> remove configuration;

Removed configuration

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Of course, all files are reset.

Now let's modify that parameter in both databases:

alter system set log_archive_dest_3='' scope=both;
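
To be sure the destination is now empty on both databases before re-creating the configuration, a quick check would simply be:

SYS AS SYSDBA> show parameter log_archive_dest_3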

 

And re-create our broker configuration:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

One file on machine2 contains our new configuration while the other one is a backup of the previous state (no configuration, hence 8K).  On machine1, nothing happened, but you already know why.

 

DGMGRL> enable configuration;

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

One file on machine2 contains our new configuration (the one dated 11:50AM) while the other one is a backup of the previous state.

 

At 11:52AM:

DGMGRL> add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Database "MYDB3" added

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:52 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

The files on the STANDBY side have not yet been modified, but you already know why.

 

At 12:02PM:

DGMGRL> enable database 'MYDB3';

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:03 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

At last, the files on the STANDBY side are modified.

And in my PRIMARY, this ENABLE DATABASE has automatically filled in the log_archive_dest_3 parameter with "service="MYDB3_dgmgrl", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="MYDB3" net_timeout=30, valid_for=(online_logfile,all_roles)".

And in my STANDBY, this ENABLE DATABASE has automatically filled in the fal_server parameter with "MYDB4_dgmgrl", and the log_archive_config parameter has changed from "nodg_config" to "dg_config=(MYDB4)".
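
If you want to double-check what that ENABLE DATABASE pushed into each database, a few SHOW PARAMETER calls are enough.  On the PRIMARY (MYDB4):

SYS AS SYSDBA> show parameter log_archive_dest_3

And on the STANDBY (MYDB3):

SYS AS SYSDBA> show parameter fal_server

SYS AS SYSDBA> show parameter log_archive_config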

 

If I go to machine1 and run SHOW CONFIGURATION, everything is OK:

DGMGRL> show configuration

Configuration - MYDB4dgconf

  Protection Mode: MaxPerformance

  Members:

  MYDB4 - Primary database

    MYDB3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 79 seconds ago)
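
Before the switchover that follows, VALIDATE DATABASE gives a quick readiness check on each member; the bare minimum would be (output not shown here):

DGMGRL> validate database 'MYDB4';

DGMGRL> validate database 'MYDB3';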

 

Let's now perform a SWITCHOVER (after first running VALIDATE DATABASE for both our databases, to confirm the SWITCHOVER can go through):

DGMGRL> switchover to 'MYDB3' ;

Performing switchover NOW, please wait...

Operation requires a connection to instance "MYDB3" on database "MYDB3"

Connecting to instance "MYDB3"...

Connected as SYSDBA.

New primary database "MYDB3" is opening...

Operation requires start up of instance "MYDB4" on database "MYDB4"

Starting instance "MYDB4"...

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is "MYDB3"

 

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

machine2@oracle:MYDB4:/app/machine2/oracle/product/12.1.0/network/admin #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

Again, the configuration files on the other server were modified first.  But all 4 files have been modified, and 2 of them (one on each machine) contain the previous state of the broker configuration.