
New Quality Blog

Posted by Dear DBA Frank Nov 30, 2018

A colleague of mine recently started his own blog.  He is an advanced DBA with good all-round knowledge of the Oracle database.  I encouraged him to share some really useful stuff he had shown me, so here goes:

MY DBA WORLD

I am sure many will enjoy his posts (about half a dozen so far), as they are clear, thorough and insightful.  Don't hesitate to leave comments on his blog posts!

Oftentimes, I have to run an EXPDP job on a server with no in-house scripts to run EXPDP.  So instead of figuring out how to write my long EXPDP command line, such as this one:

expdp system@mydb directory=DATAPUMP dumpfile=161122MYDBPROD-%U.dmp logfile=161122MYDBPRODexport.log JOB_NAME=MYDBPROD schemas=myprodschema parallel=2 EXCLUDE=STATISTICS METRICS=Y

I just copy/paste a ksh script of mine onto the database server, then run it with the appropriate 5 parameters:

expdp5.ksh 12 MYBIGSCHEMA "TABLE:\"LIKE '%$%'\"" MYDB_MYBIGSCHEMA /data/share/exports/exadata/DATAPUMP
expdp5.ksh 32 SHEBANG SHEBANG MYDB_THEWHOLESHEBANG /data/share/exports/exadata/DATAPUMP

 

Here are the contents of the Korn shell script:

#!/bin/ksh
##set -x
# July 10 2018, for a one-off expdp job. Frank Dernoncourt
# Run this script as the oracle Unix user.
# October 30, 2018: added flashback_time=to_timestamp\(localtimestamp\)

#### 1st parameter : degree of parallelism
#### 2nd parameter : schema to export, or SHEBANG for a FULL export. To export several schemas, enter a comma-separated list.
#### 3rd parameter : exclusions, or SHEBANG if you don't want to exclude anything. For exclusions, just enter what you would enter after EXCLUDE=. For instance, "STATISTICS" for "EXCLUDE=STATISTICS".
#### 4th parameter : dumpfile name without .dmp
#### 5th parameter : path for the dump and the log

LOGFILE=expdp--${2}.log
THEDATE=$(date +%Y%m%d_%H%M)
SECONDS=0
echo "" | tee -a ${LOGFILE}
echo "================== This is the log of $(basename ${0}), which ran at ${THEDATE} ==================" | tee -a ${LOGFILE}

if [ -z "${ORACLE_SID}" ]
then
  echo "ORACLE_SID not defined"
  exit 1
fi

if [ $# -ne 5 ]
then
  echo "Wrong number of parameters."
  echo "1 : DOP"
  echo "2 : schema to export or SHEBANG for FULL"
  echo "3 : exclusions or SHEBANG.  Example: \"TABLE:\\\"LIKE '%BIN\$%'\\\"\""   #### In this example, tables from the RECYCLE BIN would be excluded
  echo "4 : dumpfile name without .dmp"
  echo "5 : path for the log and dump"
  exit 1
fi

if [ "${1}" -gt 1 ]
then
  echo "You chose a degree of parallelism of ${1}, which will create ${1} dumpfiles."
  DOP=${1}
else
  DOP="1"
fi

export AE=$(echo ${2} | tr '[:lower:]' '[:upper:]')
if [[ ${AE} = "SHEBANG" ]]
then
  echo "You chose to run a FULL export."
  AEXPORTER="FULL=Y"
else
  echo "You chose to export schema ${AE}."
  AEXPORTER="SCHEMAS=${AE}"
fi

export EXCLU=$(echo ${3} | tr '[:lower:]' '[:upper:]')
if [[ ${EXCLU} = "SHEBANG" ]]
then
  echo "You chose not to exclude anything from the export."
  EXCLUSIONS=""
else
  echo "You chose those exclusions: ${EXCLU}."
  EXCLUSIONS="EXCLUDE=${EXCLU}"
fi

export FDUMP=${4}
if [ "${DOP}" -gt 1 ]
then
  DUMPFILE="${FDUMP}%U"
else
  DUMPFILE="${FDUMP}"
fi

export CHEMIN=${5}
if [ -d "${CHEMIN}" ]
then
  DIRECTORY="${CHEMIN}"
else
  echo "The chosen path does not exist."
  exit 1
fi

sqlplus -s /nolog <<EOF
connect / as sysdba
whenever sqlerror exit
-- For this one-off job, I create a DIRECTORY that I drop when it's over
create or replace directory DATA_PUMP_UN as '${DIRECTORY}';
EOF

echo "The command line to be run is: expdp system flashback_time=to_timestamp(localtimestamp) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1}" | tee -a ${LOGFILE}

echo "OK to continue Y/N ? "
read CONFIRMATION
if [ "${CONFIRMATION}" = "N" ]
then
  exit
fi

#### Only errors are routed to the main log.  The full EXPDP log will still appear in LOGFILE in the DATA_PUMP_UN directory.
expdp system flashback_time=to_timestamp\(localtimestamp\) directory=DATA_PUMP_UN METRICS=Y ${AEXPORTER} ${EXCLUSIONS} JOB_NAME=ex${AE} LOGFILE=exportde${AE}.log DUMPFILE=${DUMPFILE}.dmp PARALLEL=${DOP:-1} 2>>${LOGFILE}

RET=$?
echo "return code of the export: $RET" | tee -a ${LOGFILE}

if [ $RET -ne 0 ]
then
  echo "==> UNEXPECTED RESULT" | tee -a ${LOGFILE}
else
  echo "==> EXPECTED RESULT" | tee -a ${LOGFILE}
fi

##set +x
sqlplus -s /nolog <<EOF
connect / as sysdba
whenever sqlerror exit
-- Once the job is done, I drop the DIRECTORY.
drop directory DATA_PUMP_UN;
EOF

echo "   duration ${SECONDS} seconds" | tee -a ${LOGFILE}
date "+%n%d/%m/%Y %H:%M:%S%n" | tee -a ${LOGFILE}
echo "View the log of this script in ${LOGFILE}"

exit $RET

 

The first version of my shell script is in French and here is what it looks like when you run it:

[Image: 181031expdp5FRENCH.jpg]

If anything is unclear, let me know in the comments below.  Enjoy my script!

For an SR, Oracle Support is asking me to provide cluster data from four hours before the problem to four hours after it.

For reference, check SRDC - Data Collection for Grid Infrastructure, Storage Management, and RAC (Doc ID 2041228.1)
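Support wants a window of four hours on each side of the incident.  Rather than working out the boundaries by hand, you can compute them in the shell.  Here is a minimal sketch, assuming GNU date (for the -d option), an English locale for the month abbreviation, and a made-up incident time:

#!/bin/ksh
# Hypothetical helper: build a +/- 4-hour tfactl collection window around an incident time.
INCIDENT="2018-06-01 17:00:00"                                   # time of the problem (placeholder)
FROM=$(date -d "${INCIDENT} 4 hours ago" +"%b/%-d/%Y %H:%M:%S")  # e.g. Jun/1/2018 13:00:00
TO=$(date -d "${INCIDENT} 4 hours" +"%b/%-d/%Y %H:%M:%S")        # e.g. Jun/1/2018 21:00:00
/app/grid/12.1.0/tfa/bin/tfactl diagcollect -from "${FROM}" -to "${TO}"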

Here are the command lines I used and their output.

[myracuser@racnode1 bin]$ pwd

/app/grid/12.1.0/tfa/bin

[myracuser@racnode1 bin]$

[myracuser@racnode1 bin]$ /app/grid/12.1.0/tfa/bin/tfactl diagcollect -from "Jun/1/2018 13:00:00" -to "Jun/1/2018 21:00:00"

Collecting data for all nodes

Scanning files from Jun/1/2018 13:00:00 to Jun/1/2018 21:00:00

 

Collection Id : 20180604104236racnode1

 

Repository Location in racnode1 : /app/myracuser/tfa/repository

 

Collection monitor will wait up to 30 seconds for collections to start

2018/06/04 10:42:40 CEST : Collection Name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

2018/06/04 10:42:40 CEST : Scanning of files for Collection in progress...

2018/06/04 10:42:40 CEST : Collecting extra files...

2018/06/04 10:43:10 CEST : Getting list of files satisfying time range [06/01/2018 13:00:00 CEST, 06/01/2018 21:00:00 CEST]

2018/06/04 10:43:10 CEST : Starting Thread to identify stored files to collect

2018/06/04 10:43:10 CEST : Getting List of Files to Collect

2018/06/04 10:43:10 CEST : Trimming file : racnode1/app/myracuser/crsdata/racnode1/cvu/cvutrc/cvuhelper.log.0 with original file size : 656kB

2018/06/04 10:43:11 CEST : racnode1: Zipped 100 Files so Far

2018/06/04 10:43:16 CEST : racnode1: Zipped 200 Files so Far

2018/06/04 10:43:19 CEST : racnode1: Zipped 300 Files so Far

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_oraagent_myracuser.trc with original file size : 7MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_orarootagent_root.trc with original file size : 882kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/crsd_scriptagent_myracuser.trc with original file size : 4.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/evmd.trc with original file size : 2.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/gipcd.trc with original file size : 7.6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ocssd_1.trc with original file size : 52MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/octssd.trc with original file size : 6MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd.trc with original file size : 6.3MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdagent_root.trc with original file size : 854kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_cssdmonitor_root.trc with original file size : 847kB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_oraagent_myracuser.trc with original file size : 6.5MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/crs/racnode1/crs/trace/ohasd_orarootagent_root_1.trc with original file size : 10MB

2018/06/04 10:43:20 CEST : Trimming file : racnode1/var/log/messages-20180603 with original file size : 4.2MB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener/trace/listener.log with original file size : 666kB

2018/06/04 10:43:21 CEST : racnode1: Zipped 400 Files so Far

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan1/trace/listener_scan1.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan2/trace/listener_scan2.log with original file size : 669kB

2018/06/04 10:43:21 CEST : Trimming file : racnode1/tnslsnr/racnode1/listener_scan3/trace/listener_scan3.log with original file size : 669kB

2018/06/04 10:43:31 CEST : Finished Getting List of Files to Collect

2018/06/04 10:43:40 CEST : Collecting ADR incident files...

2018/06/04 10:43:40 CEST : Waiting for collection of extra files

2018/06/04 10:45:10 CEST : Completed collection of extra files...

2018/06/04 10:45:10 CEST : Completed Zipping of all files

2018/06/04 10:45:10 CEST : Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finished Cleaning up temporary files

2018/06/04 10:45:10 CEST : Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Finished Finalizing the Collection Zip File

2018/06/04 10:45:10 CEST : Total Number of Files checked : 3034

2018/06/04 10:45:10 CEST : Total Size of all Files Checked : 492MB

2018/06/04 10:45:10 CEST : Number of files containing required range : 489

2018/06/04 10:45:10 CEST : Total Size of Files containing required range : 135MB

2018/06/04 10:45:10 CEST : Number of files trimmed : 19

2018/06/04 10:45:10 CEST : Total Size of data prior to zip : 27MB

2018/06/04 10:45:10 CEST : Saved 109MB by trimming files

2018/06/04 10:45:10 CEST : Zip file size : 2.3MB

2018/06/04 10:45:10 CEST : Total time taken : 150s

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

Logs are being collected to: /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all

/app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

 

[Image: 180604outputofTFACollector.jpg]

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ ls -lrth

total 2.4M

-rwx------ 1 myracuser oracle 1.6K Jun  4 10:43 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip.txt

-rwx------ 1 myracuser oracle 2.4M Jun  4 10:45 racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

-rwx------ 1 myracuser oracle 4.3K Jun  4 10:45 diagcollect_20180604104236_racnode1.log

 

 

The .txt file is 40 lines long and is the report of the collection:

TFA Version : 12.1.2.7.0

Build ID : 12127020160304140533

 

Collection ID : 20180604104236racnode1

Zip file name : tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file location : /app/myracuser/tfa/repository/collection_Mon_Jun_4_10_42_36_CEST_2018_node_all/racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Zip file creation date : Mon Jun 04 2018 10:42:40 CEST

Host name : racnode1

Duration of Diagnostics :

  Start date : Fri Jun 01 2018 13:00:00 CEST

  End date : Fri Jun 01 2018 21:00:00 CEST

Component(s) in zip file : INSTALL  CRS  OS  EMAGENT  DBWLM  OCM  ACFS  RDBMS  TNS  OMS  WLS  EMPLUGINS  EM  ASMPROXY  ASM  CFGTOOLS  CHMOS  SUNDIAG

User initiated

 

Directory Structure in zip file :

racnode1/

racnode1/etc/oracle

racnode1/app/oraInventory/ContentsXML

racnode1/app/myracuser/crsdata/racnode1/output

racnode1/var/log

racnode1/tnslsnr/racnode1/listener_scan2/trace

 

 

The log file is what appeared on standard output above, which finishes with

2018/06/04 10:45:10 CEST : Completed collection of zip files.

 

 

Here is a glance at the contents of the zip file:

[myracuser@racnode1 collection_Mon_Jun_4_10_42_36_CEST_2018_node_all]$ unzip -l racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

Archive:  racnode1.tfa_Mon_Jun_4_10_42_36_CEST_2018.zip

  Length      Date    Time    Name

---------  ---------- -----   ----

    13658  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/lsinv/lsinventory2018-06-01_17-24-19PM.txt

    22345  06-01-2018 17:24   racnode1/app/grid/12.1.0/cfgtoollogs/opatch/opatch2018-06-01_17-24-19PM_1.log

   383494  06-01-2018 17:13   racnode1/app/grid/12.1.0/install/make.log

     1063  06-01-2018 15:58   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_15-58-40.log

     7402  06-01-2018 17:23   racnode1/app/grid/12.1.0/install/root_racnode1_2018-06-01_17-13-23.log

      329  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/comps.xml

      564  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/inventory.xml

      292  06-01-2018 17:40   racnode1/app/oraInventory/ContentsXML/libs.xml

     8464  06-01-2018 17:08   racnode1/app/oraInventory/logs/UpdateNodeList2018-06-01_05-07-55-PM.log

     369  06-04-2018 10:42   racnode1/racnode1_QUERYVOTE

      127  06-04-2018 10:42   racnode1/racnode1_VERSIONS

---------                     -------

27114817                     550 files

 

I then uploaded their zip (only the zip file, as requested) to the SR.  The SR status changed from “Customer Working” to “Review Update”.

The on-premises version of 18c is not available yet (it is in the cloud), but the Oracle 18c client has been released.

Let's install this client on our Windows 7 Enterprise workstation. After unzipping the archive, launch setup.exe:

[Image: ScreenShot002.jpg]

If you choose Custom, after specifying a Windows user who will own the new ORACLE_HOME, you will see this list of components to choose from:

But we are going to select "Administrator" and click "Next":

Choose "Use existing Windows user" and enter the credentials of our Windows user:

Specify the ORACLE_BASE:

Then it runs by itself:

Until it reaches this summary:

[Image: 180619client18--1.jpg]

You can save the response file, which can later be used for a silent installation, then click "Install" and it runs by itself:

Then it configures itself:

And it should end like this:

If you click the "Help" button, it makes you smile:

We now have a new Oracle client in the specified path:

And in the Windows Start menu, we have these new entries:

[Image: ScreenShot001.jpg]

And we can now try the new SQL+.

If you use Data Guard, you probably use its broker.  You may not have noticed, but whenever you make changes to your Broker configuration, those changes are stored in a file on the operating system.  The location of that file is specified in the DG_BROKER_CONFIG_FILE1 initialisation parameter.  DG_BROKER_CONFIG_FILE2 is a backup of DG_BROKER_CONFIG_FILE1, as we are going to see in this post.  Note that, as the Oracle documentation says, these initialisation parameters can be modified only when the broker is not running.
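You can check where those files live, and whether the broker is running, straight from SQL*Plus; a minimal sketch:

#!/bin/ksh
# Minimal sketch: show the broker-related parameters discussed above.
sqlplus -s / as sysdba <<EOF
show parameter dg_broker_config_file
show parameter dg_broker_start
EOF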

Let's look at what happens to those files when you create/delete/re-create/modify a broker configuration in Oracle 12cR1.

 

Here is the situation at 11H30 on the 29th of May (Database MYDB4 is on server machine2, and database MYDB3 is on server machine1):

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 29 10:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 10:03 /app/racnode1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:22 /app/racnode1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's do away with the current broker configuration:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

ORA-16620 occurs when the broker is not started on one of the databases.  In that case, the broker configuration files are not affected.  Let's start the broker on our STANDBY and try again.
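Starting the broker is just a matter of setting DG_BROKER_START (we will do the same for MYDB4 further down); a minimal sketch, to be run on the STANDBY:

#!/bin/ksh
# Minimal sketch: start the broker so DGMGRL can reach this database.
sqlplus -s / as sysdba <<EOF
alter system set dg_broker_start=TRUE scope=both;
EOF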

DGMGRL> remove configuration

Removed configuration

 

Now let's create a new broker configuration:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Configuration "MYDB3dgconf" created with primary database "MYDB3"

 

Since I am working on machine1, the broker configuration files on machine1 have been affected at 11:35AM:

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle  12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

 

Now let's enable that configuration:

DGMGRL>  enable configuration

Enabled.

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

As you can see, only one of the configuration files was modified (at 11:36AM).  This is because the other one is a backup of the previous state of the configuration (when it was not enabled).
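If you want to watch this ping-pong between the two files while you type DGMGRL commands, a throwaway polling loop helps; a minimal sketch, using the same paths as in this post:

#!/bin/ksh
# Minimal sketch: list the broker configuration files every 5 seconds (stop with Ctrl-C).
while true
do
  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*MYDB3*
  sleep 5
done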

 

At 11:38AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Error: ORA-16525: The Oracle Data Guard broker is not yet available.

Failed.

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:35 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

No files were modified.  When you generate an error in DGMGRL, the broker configuration files are not affected.

Here is the explanation in the Data Guard log on machine1:

ADD DATABASE

      05/29/2018 11:38:54

      DG_BROKER_START is not set to TRUE on member MYDB4. Set the parameter to TRUE and then try to add the member to the configuration.

 

So let's correct that:

29-05 11:43 MYDB4 SYS AS SYSDBA> alter system set dg_broker_start=TRUE;

System altered.

 

Now at 11:45AM:

DGMGRL> add database 'MYDB4' as connect identifier is MYDB4_DGMGRL maintained as physical;

Database "MYDB4" added

 

racnode1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 11:36 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 11:45 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

On machine1, one configuration file was altered, and the other is a backup of the previous state.

But on machine2, both configuration files were modified (in fact, re-created), since this is the first time this new broker configuration is generating a change on machine2 (remember, that broker configuration was created at 11:35AM on machine1 and so far had not done anything involving machine2).

machine2@oracle::~ # ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

 

But the STANDBY broker log reports an error:

05/29/2018 11:44:50

      Site MYDB3 returned ORA-16603.

      MYDB4 version check failed; status = ORA-16603: detected configuration unique ID mismatch

            configuration will be deleted on this database

 

Do you remember that the broker of my STANDBY was not started when I ran CREATE CONFIGURATION on the PRIMARY?  So let's re-create the broker configuration while both brokers are started.

 

At 11:55AM:

DGMGRL> remove configuration

Warning: ORA-16620: one or more databases could not be reached for a delete operation

Removed configuration

 

machine1@oracle::~ # ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 29 11:55 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

As you can see, when you remove a broker configuration, there is no backup of the removed configuration since both files are reset (their size is 8K, while when there was a configuration, their size was 12K).

At that point, I had to move to another subject, and my colleague finished that broker configuration.  But on the following day (the 30th of May), I had to re-create it.

Here is the situation on the morning of May 30:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 29 21:01 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

At 10:40AM, I removed the broker configuration from machine1:

DGMGRL>  remove configuration;

Removed configuration

 

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/racnode1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:40 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

It seems it first connects to the other server (the files on that one were reset at 10:40, while the files on machine1 were reset 1 minute later).

 

Now let's create a new broker configuration, with MYDB3 as the PRIMARY:

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16584: operation cannot be performed on a standby database

Failed.

 

So let's go to the other server (machine2):

DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys

Password:

Connected as SYSDG.

 

DGMGRL> create configuration MYDB3dgconf as primary database is 'MYDB3' connect identifier is MYDB3_DGMGRL;

Error: ORA-16642: DB_UNIQUE_NAME mismatch

Failed.

This error occurred at 10:44AM, and it did modify the configuration files:

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:44 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

But not on the STANDBY side (machine1):

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's create our configuration but with the right PRIMARY this time:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

As seen previously, when you create a configuration, both files are affected (but notice their difference in size):

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

But on the STANDBY side, nothing happened, as the STANDBY has not been added to the configuration yet:

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

 

Let's enable that configuration:

DGMGRL> enable configuration

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Again, one of the files contains the previous state of the configuration (two minutes earlier, before the enablement).

 

Now, let's add a STANDBY to that configuration:

DGMGRL> add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Failed.

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 10:46 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 10:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

That error did not affect the configuration files.

 

Before we modify initialisation parameter log_archive_dest_3 (which is the cause of the ORA-16698), we must remove our configuration.  We will re-create it after that modification in our database.

DGMGRL> remove configuration;

Removed configuration

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 11:00 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

Of course, all files are reset.

Now let's modify that parameter in both databases:

alter system set log_archive_dest_3='' scope =both ;

 

And re-create our broker configuration:

DGMGRL> create configuration MYDB4dgconf as primary database is 'MYDB4' connect identifier is MYDB4_DGMGRL;

Configuration "MYDB4dgconf" created with primary database "MYDB4"

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 8.0K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle  12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

One file on machine2 contains our new configuration while the other one is a backup of the previous state (no configuration, hence 8K).  Meanwhile, nothing happened on machine1, but you already know why.

 

DGMGRL> enable configuration;

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:48 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

One file on machine2 contains our new configuration (the one dated 11:50AM) while the other one is a backup of the previous state.

 

At 11:52AM:

DGMGRL>  add database 'MYDB3' as connect identifier is MYDB3_DGMGRL maintained as physical;

Database "MYDB3" added

 


machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 11:50 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 11:52 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

-rw-rw---- 1 oracle oracle 8.0K May 30 10:41 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

The files on the STANDBY side have not yet been modified, but you already know why.

 

At 12:02PM:

DGMGRL> enable database 'MYDB3';

Enabled.

 

machine2@oracle::~ #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

machine1@oracle:MYDB3:/app/machine1/oracle/adm/MYDB3 #  ls -rtlh /app/machine1/oracle/product/12.1.0/dbs/dr*3*

-rw-rw---- 1 oracle oracle 12K May 30 12:02 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 12:03 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

At last, the files on the STANDBY side are modified.

And in my PRIMARY, this ENABLE DATABASE has automatically filled in the log_archive_dest_3 parameter with "service="MYDB3_dgmgrl", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="MYDB3" net_timeout=30, valid_for=(online_logfile,all_roles)".

And in my STANDBY, this ENABLE DATABASE has automatically filled in the fal_server parameter "with MYDB4_dgmgrl" and the log_archive_config parameter has changed from "nodg_config" to "dg_config=(MYDB4)".
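Those automatically-set parameters are easy to verify on each side; a minimal sketch:

#!/bin/ksh
# Minimal sketch: check the parameters the broker filled in for us.
sqlplus -s / as sysdba <<EOF
show parameter log_archive_dest_3
show parameter fal_server
show parameter log_archive_config
EOF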

 

If I go to machine1 and run SHOW CONFIGURATION, everything is OK:

DGMGRL> show configuration

Configuration - MYDB4dgconf

  Protection Mode: MaxPerformance

  Members:

  MYDB4 - Primary database

    MYDB3 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 79 seconds ago)
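Before the switchover itself, VALIDATE DATABASE tells you for each member whether the switchover can go through; a minimal sketch, assuming OS authentication works for DGMGRL:

#!/bin/ksh
# Minimal sketch: validate both members before attempting the switchover.
dgmgrl / <<EOF
validate database 'MYDB4';
validate database 'MYDB3';
exit
EOF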

 

Let's now perform a SWITCHOVER (after first running VALIDATE DATABASE for both our databases, as sketched above, to confirm the SWITCHOVER can go through):

DGMGRL> switchover to 'MYDB3' ;

Performing switchover NOW, please wait...

Operation requires a connection to instance "MYDB3" on database "MYDB3"

Connecting to instance "MYDB3"...

Connected as SYSDBA.

New primary database "MYDB3" is opening...

Operation requires start up of instance "MYDB4" on database "MYDB4"

Starting instance "MYDB4"...

ORACLE instance started.

Database mounted.

Switchover succeeded, new primary is "MYDB3"

 

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr2MYDB3.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:22 /app/machine1/oracle/product/12.1.0/dbs/dr1MYDB3.dat

machine2@oracle:MYDB4:/app/machine2/oracle/product/12.1.0/network/admin #  ls -rtlh /app/machine2/oracle/product/12.1.0/dbs/dr*4*

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr2MYDB4.dat

-rw-rw---- 1 oracle oracle 12K May 30 13:21 /app/machine2/oracle/product/12.1.0/dbs/dr1MYDB4.dat

Again, the configuration files on the other server were modified first.  But all 4 files have been modified, and 2 of them (one on each machine) contain the previous state of the broker configuration.

Oracle 12cR1. After getting this ORA-01187:

11-04 17:55 MY12cDB SYS AS SYSDBA> select * from dba_temp_files;

select * from dba_temp_files

              *

ERROR at line 1:

ORA-01187: cannot read from file  because it failed verification tests

ORA-01110: data file 257: '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf'

 

I decided to re-create the TEMPORARY TABLESPACE that this file belonged to, which happened to be the DEFAULT TEMPORARY TABLESPACE, named TEMP, of my 12cR1 database.

 

Here are the steps taken:

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/MY12cDB/ora3/MY12cDB/TEMP2.dbf' SIZE 10M AUTOEXTEND ON MAXSIZE 2048M;

alter database default temporary tablespace TEMP2;

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

drop tablespace temp including contents and datafiles;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 2048M;

alter database default temporary tablespace TEMP;

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

drop tablespace temp2 including contents and datafiles;

 

 

And here is the SQL*PLUS output:

 

11-04 17:55 MY12cDB SYS AS SYSDBA> select * from dba_temp_files;

select * from dba_temp_files

              *

ERROR at line 1:

ORA-01187: cannot read from file  because it failed verification tests

ORA-01110: data file 257: '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf'

 

 

11-04 18:02 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/MY12cDB/ora3/MY12cDB/TEMP2.dbf' SIZE 10M AUTOEXTEND ON  MAXSIZE 2048M;

 

Tablespace created.

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> alter database default temporary tablespace TEMP2;

 

Database altered.

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

 

PROPERTY_VALUE

--------------------------

TEMP2

 

 

11-04 18:04 MY12cDB SYS AS SYSDBA> col PROPERTY_VALUE for a10

11-04 18:04 MY12cDB SYS AS SYSDBA> drop tablespace temp including contents and datafiles;

 

Tablespace dropped.

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M;

CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M

*

ERROR at line 1:

ORA-01119: error in creating database file '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf'

ORA-27038: created file already exists

Additional information: 1

 

 

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> !ls -lh /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

-rw-rw---- 1 oracle oracle 257M Jan 26 10:05 /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

 

 

11-04 18:05 MY12cDB SYS AS SYSDBA> !rm /data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON  MAXSIZE 2048M;

 

Tablespace created.

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> alter database default temporary tablespace TEMP;

 

Database altered.

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

 

PROPERTY_V

----------

TEMP

 

 

11-04 18:07 MY12cDB SYS AS SYSDBA> drop tablespace temp2 including contents and datafiles;

 

Tablespace dropped.

 

 

11-04 18:08 MY12cDB SYS AS SYSDBA>  select * from dba_temp_files;

 

FILE_NAME

   FILE_ID TABLESPACE_NAME         BYTES     BLOCKS STATUS       RELATIVE_FNO AUTOEXTENSIB   MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS

/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf

         1 TEMP                104857600      12800 ONLINE                  1 YES          2147483648     262144            1  103809024       12672

 

To summarize:

  1. Create a new temporary tablespace TEMP2
  2. Make TEMP2 the default temporary tablespace
  3. Drop your original tablespace TEMP
  4. Create a new temporary tablespace TEMP
  5. Make TEMP the default temporary tablespace
  6. Drop TEMP2

And your default temporary tablespace is TEMP, as required, and has a brand new temp file that will not cause ORA-01187.
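If you have to do this more than once, the six steps can be wrapped in one script; a minimal sketch, using the paths from this post (remember that, as seen above, a leftover temp file may have to be removed at the OS level before the second CREATE succeeds):

#!/bin/ksh
# Minimal sketch: re-create the default temporary tablespace in one pass.
sqlplus -s / as sysdba <<EOF
whenever sqlerror exit
CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/data/MY12cDB/ora3/MY12cDB/TEMP2.dbf' SIZE 10M AUTOEXTEND ON MAXSIZE 2048M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/data/MY12cDB/ora3/MY12cDB/ETMPT01_01.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 2048M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
EOF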

RacOneNode 11.2.0.4. Let's follow what occurs when we relocate a RacOneNode database from one node to another. Reminder: in RacOneNode, only one instance is up at a time.

 

Situation before the RELOCATE:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST1 is running on node node1

Online relocation: INACTIVE

 

On our "node1" server, let's launch the RELOCATE (from node1 to node2):

oracle@node1:/home/oracle $ srvctl relocate database -d RAC1NODE_DB -n node2

 

On the other node, let's monitor the RELOCATE with SRVCTL STATUS DATABASE:

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Instance RAC1NODE_INST1 is running on node node1

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Here we can see that for a moment, while online relocation is active, 2 instances are up.  This does not last long, however.
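To catch that transient state, I simply polled SRVCTL; a minimal sketch:

#!/bin/ksh
# Minimal sketch: poll the database status every few seconds during the relocation.
while true
do
  srvctl status database -d RAC1NODE_DB
  echo "----"
  sleep 5
done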

 

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Soon, while online relocation is still active, our database is said to be running on the destination node, and the first instance is now down.

 

Looking at the alert.log on node 1 (the instance that is going to be shut down):

2018-03-13 02:00:00.005000 +01:00

Closing scheduler window

Restoring Resource Manager plan DEFAULT_PLAN via scheduler window

Setting Resource Manager plan DEFAULT_PLAN via parameter

2018-03-13 03:10:51.311000 +01:00

Stopping background process CJQ0

2018-03-13 16:23:17.464000 +01:00

Reconfiguration started (old inc 2, new inc 4)

List of instances:

1 2 (myinst: 1)

Global Resource Directory frozen

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Submitted all GCS remote-cache requests

Fix write in gcs resources

Reconfiguration complete

2018-03-13 16:23:20.133000 +01:00

minact-scn: Master returning as live inst:2 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:27.330000 +01:00

ALTER SYSTEM SET service_names='RAC1NODE_DB' SCOPE=MEMORY SID='RAC1NODE_INST1';

Shutting down instance (transactional local)

Stopping background process SMCO

2018-03-13 16:23:28.504000 +01:00

Shutting down instance: further logons disabled

Stopping background process QMNC

2018-03-13 16:23:34.536000 +01:00

Stopping background process MMNL

2018-03-13 16:23:35.538000 +01:00

Stopping background process MMON

2018-03-13 16:23:36.540000 +01:00

Local transactions complete. Performing immediate shutdown

License high water mark = 45

2018-03-13 16:23:39.663000 +01:00

ALTER SYSTEM SET _shutdown_completion_timeout_mins=30 SCOPE=MEMORY;

ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

SMON: disabling tx recovery

Stopping background process RCBG

2018-03-13 16:23:41.737000 +01:00

SMON: disabling cache recovery

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 16

Redo thread 1 internally disabled at seq 2772 (LGWR)

Shutting down archive processes

Archiving is disabled

ARCH shutting down

ARCH shutting down

ARCH shutting down

ARC2: Archival stopped

ARC0: Archival stopped

ARC3: Archival stopped

ARC1: Archiving disabled thread 1 sequence 2772

Archived Log entry 2792 added for thread 1 sequence 2772 ID 0xde13bfb6 dest 1:

ARCH shutting down

ARC1: Archival stopped

NOTE: Deferred communication with ASM instance

2018-03-13 16:23:42.816000 +01:00

Thread 1 closed at log sequence 2772

Successful close of redo thread 1

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 4

Completed: ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

Shutting down archive processes

Archiving is disabled

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 2

Completed: ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: force a map free for map id 2

NOTE: force a map free for map id 4

NOTE: force a map free for map id 16

2018-03-13 16:23:44.226000 +01:00

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: Shutting down MARK background process

Stopping background process VKTM

NOTE: force a map free for map id 4684

NOTE: force a map free for map id 4683

2018-03-13 16:23:45.900000 +01:00

freeing rdom 0

2018-03-13 16:23:48.283000 +01:00

Instance shutdown complete

 

and the alert.log on node 2 (the instance that is started):

2018-03-13 16:23:09.103000 +01:00

Starting ORACLE instance (normal)

...

2018-03-13 16:23:37.540000 +01:00

minact-scn: Inst 2 is now the master inc#:4 mmon proc-id:3931 status:0x7

minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000

minact-scn: Master returning as live inst:1 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:46.352000 +01:00

Reconfiguration started (old inc 4, new inc 6)

List of instances:

2 (myinst: 2)

Global Resource Directory frozen

* dead instance detected - domain 0 invalid = TRUE

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Post SMON to start 1st pass IR

Instance recovery: looking for dead threads

Submitted all GCS remote-cache requests

Post SMON to start 1st pass IR

Fix write in gcs resources

Starting background process CJQ0

Reconfiguration complete

 

End result:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: INACTIVE

We are back to normal: the online relocation is over (hence INACTIVE), and only one instance is up, the one on node2.

I needed to increase the size of a TEMPORARY tablespace in a STANDBY database.  Hey, it's a standby database, so alterations must be made to the primary database, and Data Guard will then propagate those changes to the standby database.

So I went to the primary database and ran:

14:00:09 22-01 14:00 SYS > alter database tempfile '/appli/oracle/.../oradata02/****_temp_01.dbf'  AUTOEXTEND ON NEXT 10M MAXSIZE 13G  ;

 

Database altered.

[Image: 180122alterMAXISEofTEMP.JPG]

 

On the standby database, I was monitoring its alert log with

adrci> show alert -tail -f

waiting for a message notifying me of the change on the temp file.

To trigger off that change, I switched logfiles on the primary to create a new archived log:

14:02:37 22-01 14:02 PRIMARYDB SYS > alter system switch logfile ;

 

System altered.

 

The alert log of the standby did mention applying the newly received archived logs but my temp file on the standby did not budge.  Since all I did on the primary was to alter the MAXSIZE of my temp file, I tried something more telling:

14:35:29 22-01 14:35 PRIMARYDB SYS > ALTER DATABASE TEMPFILE '/appli/oracle/..../oradata02/****_temp_01.dbf' RESIZE 7G;

 

Database altered.

[Image: 180122RESIZEofTEMP.JPG]

Again, I did alter system switch logfile  to propagate the change to the standby, but to no avail.

 

I googled that phenomenon and read in a DBI-SERVICES blog post that changes to temp files on the primary are not propagated to the standby because they do not generate redo.  So I logged out of SQL+ and started a brand-new session, in which I checked the REDO that I generate (with V$MYSTAT) when altering my temp file:

 

15:56:17 PRIMARYDB SYS AS SYSDBA > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo entries                                   0

redo size                                      0

redo entries for lost write detection          0

redo synch poll writes                         0

redo writes                                    0

redo synch writes                              0

redo size for direct writes                    0

 

15:56:26 PRIMARYDB SYS AS SYSDBA > alter database tempfile '/appli/oracle/..../oradata02/****_temp_01.dbf' autoextend on NEXT 10M MAXSIZE 11G  ;

 

Database altered.

 

15:56:37 PRIMARYDB SYS AS SYSDBA > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo size                                    816

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

15:56:43  SYS AS SYSDBA > alter system switch logfile ;

SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

System altered.

 

15:56:59 PRIMARYDB SYS AS SYSDBA >

 

STATNAME                                   VALUE

-------------------------------------- ---------

redo size                                    816

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

15:57:00 PRIMARYDB SYS AS SYSDBA > select BYTES/1024/1024,MAXBYTES/1024/1024,STATUS,USER_BYTES/1024/1024 UsedMB  from dba_temp_files ;

 

BYTES/1024/1024 MAXBYTES/1024/1024 STATUS                    USEDMB

--------------- ------------------ --------------------- ----------

           8192              11264 ONLINE                      8191

 

That test shows that on my 11.2.0.2 database, my alter database tempfile does generate redo (3 redo entries).

 

27-02 12:27 PRIMARYDB SYS > alter database tempfile '/appli/oracle/..../oradata02/****_temp_01.dbf' resize 8g;

 

Database altered.

 

27-02 12:30 PRIMARYDB SYS > SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

---------------------------------------- -------

redo size                                    792

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes

 

27-02 12:31 PRIMARYDB SYS >  alter system switch logfile ;

 

System altered.

 

27-02 12:32 PRIMARYDB SYS >  SELECT NAME statname,VALUE FROM V$MYSTAT NATURAL JOIN V$STATNAME WHERE NAME LIKE 'redo%entr%' OR NAME LIKE '%redo%size' or name like '%redo%writes' order by value desc;

 

STATNAME                                   VALUE

---------------------------------------- -------

redo size                                    792

redo entries                                   3

redo synch writes                              1

redo writes                                    0

redo synch poll writes                         0

redo entries for lost write detection          0

redo size for direct writes                    0

 

 

ALTER TABLESPACE TEMP ADD TEMPFILE '/appli/oracle/..../oradata02/****_temp_02.dbf' SIZE 2M ;

This works even though the STANDBY is opened READ ONLY WITH APPLY.

And this generates no redo (no wonder, it's open READ-ONLY):

STATNAME                                           VALUE

-------------------------------------------- -----------------

redo entries                                           0
redo size                                              0
redo entries for lost write detection                  0
redo synch poll writes                                 0
redo writes                                            0
redo synch writes                                      0
redo size for direct writes                            0

 

 

But to add a temporary tablespace, you must first create it on the PRIMARY:

27-02 16:21 PRIMARYDB SYS > CREATE TEMPORARY TABLESPACE "TEMP2" TEMPFILE '/appli/oracle/..../oradata02/****_temp2_01.dbf' SIZE 4M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K ;

Tablespace created.

 

27-02 16:36 PRIMARYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          8192 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP2            4 /appli/oracle/..../oradata02/****_temp2_01.dbf

 

On the STANDBY, you will discover that there is no new TEMPFILE:

27-02 16:23 STANDBYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

 

But a new TEMPORARY TABLESPACE does exist:

27-02 16:39 STANDBYDB SYS >  select contents,STATUS,TABLESPACE_NAME   from dba_tablespaces where tablespace_name like 'TEMP%';

 

CONTENTS                    STATUS       TABLESP

--------------------------- ------------ -------

TEMPORARY                   ONLINE       TEMP

TEMPORARY                   ONLINE       TEMP2

 

So you must ADD a TEMPFILE to that empty TEMPORARY TABLESPACE (yes yes, that's what Oracle says):

27-02 16:42 STANDBYDB SYS > ALTER TABLESPACE temp2 ADD TEMPFILE '/appli/oracle/..../oradata02/****_temp2_01.dbf' SIZE 4M AUTOEXTEND ON MAXSIZE 20g;

 

Tablespace altered.

 

27-02 16:44 STANDBYDB SYS > select autoextensible auto,tablespace_name,bytes/1024/1024 "Mo",file_name from dba_temp_files;

 

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP             2 /appli/oracle/..../oradata02/****_temp_02.dbf

YES       TEMP2            4 /appli/oracle/..../oradata02/****_temp2_01.dbf

 

Nevertheless, if you DROP that new TEMPORARY TABLESPACE on the PRIMARY:

27-02 16:39 PRIMARYDB SYS > drop tablespace temp2 including contents and datafiles ;

Tablespace dropped.

 

It is reflected immediately on the STANDBY:

27-02 16:44 STANDBYDB SYS > /

 

AUTO      TABLESP         Mo FILE_NAME

--------- ------- ---------- ----------------------------------------------------------------------

NO        TEMP          5120 /appli/oracle/..../oradata02/****_temp_01.dbf

NO        TEMP             2 /appli/oracle/..../oradata02/****_temp_02.dbf

 

And while the PRIMARY ALERT.LOG says:

2018-02-27 16:48:53.648000 +01:00

drop tablespace temp2 including contents and datafiles

Deleted file /appli/oracle/..../oradata02/****_temp2_01.dbf

Completed: drop tablespace temp2 including contents and datafiles

[Image: 180228ADDTEMPFILEdansALERTLOG.JPG]

The STANDBY ALERT.LOG says it differently:

2018-02-27 16:48:55.800000 +01:00

Deleted file /appli/oracle/..../oradata02/*_temp2_01.dbf

Recovery dropped temporary tablespace 'TEMP2'

If you need a specific date format in the Oracle session of a program that you are not running interactively (and therefore, you cannot type “ALTER SESSION SET NLS_DATE_FORMAT = "DD-MM hh24:mi" ;”), you can define it in the shell that executes your program.

Here is a simple demonstration with SQL+.

Let’s create a tdform.ksh shell script that will run SQL+ 4 times, with NLS_DATE_FORMAT defined in 4 different ways.

 

#!/bin/ksh

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro nothing defined

select sysdate from dual ;

exit;

EOF

 

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

alter session set nls_date_format = "DD-MM-RR HH24:MI:SS" ;

pro NLS_DATE_FORMAT forced with an ALTER SESSION

select sysdate from dual ;

exit;

EOF

 

export NLS_DATE_FORMAT="YY-MM-DD hh24:mi"

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro NDF exported in SHELL as YY-MM-DD hh24:mi

select sysdate from dual ;

exit;

EOF

 

export NLS_DATE_FORMAT="DD-MM-RR hh24:mi:ss"

sqlplus -s / as sysdba  << EOF

set pagesize 0 echo off lin 300

pro NDF exported in SHELL French format with seconds

select sysdate from dual ;

exit;

EOF

exit

 

Then let’s run that shell script and see if the format of “sysdate” varies each time.

 

oracle@mylinuxserver:/home/oracle $ ./tdform.ksh

nothing defined

29-JAN-18

 

Session altered.

 

NLS_DATE_FORMAT forced with an ALTER SESSION

29-01-18 17:02:04

 

NDF exported in SHELL as YY-MM-DD hh24:mi

18-01-29 17:02

 

NDF exported in SHELL French format with seconds

29-01-18 17:02:04

 

As demonstrated here, SQL+ inherits the NLS_DATE_FORMAT defined in the shell that executes it.  This is especially useful with other programs that do not allow you to run “ALTER SESSION SET NLS_DATE_FORMAT”, such as impdp, RMAN….
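RMAN, for instance, honours an NLS_DATE_FORMAT set in the environment when it displays dates; a minimal sketch:

#!/bin/ksh
# Minimal sketch: get RMAN to show backup times with seconds precision.
export NLS_DATE_FORMAT="DD-MM-RR hh24:mi:ss"
rman target / <<EOF
list backup summary;
exit
EOF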

For the past few years, I have been going to a number of Oracle User Group conferences.  They are a great way to learn a lot in a short time-frame.  Also the Oracle community is made up of nice, knowledgeable people who are willing to share their experience.  Of course, you can have your company pay the fee for attending.  But if you volunteer (as a session chair or a speaker), you get to attend the whole event for free.

So I was in Birmingham from December 3 to December 6 for TECH17, the annual conference of one of the biggest Oracle User Groups in Europe: UKOUG (8500 people).  I gave a presentation on securing the Oracle database listener (the network part of the Oracle database) on Sunday the 3rd in front of about 25 people, and another presentation on Wednesday the 6th in front of about 15 people.  That second presentation was of a new kind: in collaboration with UKOUG, I gave a 4-question quiz that attendees could take part in via the UKOUG smartphone application.  Their answers were streamed live over the Internet to a web site that I displayed on screen so as to see which answers received the most votes and discuss the various possibilities (and the right answer!).

Most of all, I attended quality presentations on topics of interest: PL/SQL best practices, the future of Oracle open-source tools, PaaS deployments, the Exadata Cloud Machine (or Exadata on-premises), Make my Database Go Faster, how to use AWR reports, Exadata patching, Oracle Cloud trial, Docker for DBAs, Ansible and Oracle, the Cloud for developers, escape from Exadata, automate database patching, the Oracle Scheduler, DBA tools, Exadata snapshot databases, Exadata performance health check, Data Guard ideas…

I also got to discuss some issues I am encountering at work with helpful experts.  And I chatted about the work of a DBA with fellow DBAs.  All in all, time well-spent – thanks to Neurones IT for sending me there.

[Image: 171206FALDQ.jpg]

 

I know that other technologies have their own conferences too.  Consider participating in those community events and be at the cutting-edge of your field.  Better yet, consider getting involved, and give your career a boost.


TECH17 Is Coming Up

Posted by Dear DBA Frank Dec 1, 2017

In the wee hours of Sunday the 3rd

I shall soar as a bird

Hoping not to be smashed to smithereens

On the UK-bound flight to TECH17

 

More seriously, yes I am going to TECH17.  For 2 reasons this time: teach and learn.

 

[Image: ImspeakingatTECH17.jpg]

Teach

I will give 2 presentations.  The first one is... the first one on SuperSunday.  It's about a paramount subject: the security of the Oracle database listener, which is the part of the Oracle database that controls remote access to your databases.  So it had better be secure!  I will share some features that are simple to implement and very effective.

The second presentation will be on Wednesday at 2:45PM.  It's a one-of-a-kind presentation: we are going to avail ourselves of smartphone and internet technology to have a bit of fun with a quiz that you can participate in with your smartphone and learn or review interesting things to know about the Oracle database (hush hush, the questions are top-secret!).

 

Learn

I am looking forward to listening to plenty of other great presentations.  I would also like to discuss two issues I am facing at work:

1. how to deliver/provision hundreds of Oracle databases (11gR2 or 12cR1) as fast as possible

2. how to refresh a QA primary database with data from production without breaking its Dataguard standby databases

 

If you have ideas or experience on these two issues, please talk to me during the conference.  You can find me via the TECH17 smartphone app or on Twitter.

After looking through the various VirtualBox images that Oracle makes freely available, I decided that the Oracle Data Integrator 12c VM was the coolest, as it could meet several of my needs.  It features:

  • Oracle Enterprise Linux (64-bit) version 6.4 with Unbreakable Enterprise Kernel 2.6.39-400.17.1.el6uek.x86_64
  • Oracle EE Database 11.2.0.4
  • Oracle Data Integrator 12.2.1.3
  • Oracle GoldenGate 12.1.2.0.0

 

To obtain it, you must accept the licence agreement, then download 5 zip files:

Then open file 001 with your zip program:

 

Once uncompressed, you should get one 9GB .ova file:

 

which you must then import as an appliance into VirtualBox:
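(If you script your VirtualBox work, the import can also be done with VBoxManage; the .ova file name below is illustrative:)

VBoxManage import ODI_12c_VM.ova --vsys 0 --vmname ODIVM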

 

When you import the appliance, change a few settings:

  • Rename the virtual machine to a short name with no spaces: you will be thankful when you write scripts and when you use VBoxManage (the VirtualBox CLI).
  • Modify the description to mention the date you imported the appliance (always good to know how old your VMs are).
  • Modify the path to the hard-drive files (the .vmdk files): make it short and without spaces.  Again, you will be thankful at the command line.
  • Lower its RAM requirements (you can always increase them later), but make sure the VM uses at least 2 of your processors (this is Oracle software, you know!).
  • Make sure you add a shared folder to your VM:

 

And last but not least, in "Network", click on "Advanced", then on "Port Forwarding":

Then add a line such as this, with a port of your choice redirected to port 1521 (the port of the listener on your VM), so as to access the Oracle database on your VM from your host!
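A minimal command-line sketch of the same rule (assuming you renamed the VM to ODIVM and picked host port 1530; both names are illustrative):

# forward host port 1530 to guest port 1521 (the listener) on NAT adapter 1
VBoxManage modifyvm "ODIVM" --natpf1 "oracle-listener,tcp,,1530,,1521"

With the rule in place, SQLcl on the Windows host can reach the database on the VM directly: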

C:\Oracle\sqldev\sqlcl\bin\sql.bat system/oracle@//localhost:1530/orcl

Let's boldly go where no DBA has gone before... and follow this procedure: https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-suse

 

zypper install -y mssql-server

 

 

Maryland:/home # zypper addrepo -fc https://packages.microsoft.com/config/sles/12/mssql-server.repo

Adding repository 'packages-microsoft-com-mssql-server' ..........................................[done]

Repository 'packages-microsoft-com-mssql-server' successfully added

 

URI         : https://packages.microsoft.com/sles/12/mssql-server

Enabled     : Yes                                          

GPG Check   : Yes                                          

Autorefresh : Yes                                          

Priority    : 99 (default priority)                        

 

Repository priorities are without effect. All enabled repositories share the same priority.

Maryland:/home # zypper --gpg-auto-import-keys refresh

Repository 'Main Repository (NON-OSS)' is up to date.                                             

Repository 'Update Repository (Non-Oss)' is up to date.                                           

Repository 'Main Repository (OSS)' is up to date.                                                 

Repository 'Main Repository (Sources)' is up to date.                                             

Retrieving repository 'Main Update Repository' metadata ..........................................[done]

Building repository 'Main Update Repository' cache ...............................................[done]

Retrieving repository 'multimedia:apps' metadata ----------------------------------------------------[/]

 

Automatically importing the following key:

 

  Repository:       multimedia:apps                                           

  Key Name:         multimedia:apps OBS Project <multimedia:apps@build.opensuse.org>

  Key Fingerprint:  78AB2296 61D3830A 08939082 868E7FC4 E95B46D4              

  Key Created:      Sat 29 Jul 2017 10:43:51 CEST                             

  Key Expires:      Mon 07 Oct 2019 10:43:51 CEST                             

  Rpm Name:         gpg-pubkey-e95b46d4-597c4ac7                              

 

 

Retrieving repository 'multimedia:apps' metadata .................................................[done]

Building repository 'multimedia:apps' cache ......................................................[done]

Retrieving repository 'multimedia:libs' metadata .................................................[done]

Building repository 'multimedia:libs' cache ......................................................[done]

Repository 'libdvdcss repository' is up to date.                                                  

Retrieving repository 'Packman Repository' metadata ..............................................[done]

Building repository 'Packman Repository' cache ...................................................[done]

Retrieving repository 'packages-microsoft-com-mssql-server' metadata --------------------------------[\]

 

Automatically importing the following key:

 

  Repository:       packages-microsoft-com-mssql-server              

  Key Name:         Microsoft (Release signing) <gpgsecurity@microsoft.com>

  Key Fingerprint:  BC528686 B50D79E3 39D3721C EB3E94AD BE1229CF     

  Key Created:      Thu 29 Oct 2015 00:21:48 CET                     

  Key Expires:      (does not expire)                                

  Rpm Name:         gpg-pubkey-be1229cf-5631588c                     

 

 

Retrieving repository 'packages-microsoft-com-mssql-server' metadata .............................[done]

Building repository 'packages-microsoft-com-mssql-server' cache ..................................[done]

All repositories have been refreshed.

Maryland:/home # zypper install -y mssql-server

Loading repository data...

Reading installed packages...

Resolving package dependencies...

 

The following 2 NEW packages are going to be installed:

  libsss_nss_idmap0 mssql-server

 

2 new packages to install.

Overall download size: 165.4 MiB. Already cached: 0 B. After the operation, additional 165.4 MiB will be

used.

Continue? [y/n/...? shows all options] (y): y

Retrieving package libsss_nss_idmap0-1.13.4-5.3.1.x86_64           (1/2),  32.1 KiB ( 18.4 KiB unpacked)

Retrieving: libsss_nss_idmap0-1.13.4-5.3.1.x86_64.rpm ............................................[done]

Retrieving package mssql-server-14.0.900.75-1.x86_64               (2/2), 165.4 MiB (165.4 MiB unpacked)

Retrieving: mssql-server-14.0.900.75-1.x86_64.rpm ....................................[done (3.2 MiB/s)]

Checking for file conflicts: .....................................................................[done]

(1/2) Installing: libsss_nss_idmap0-1.13.4-5.3.1.x86_64 ..........................................[done]

(2/2) Installing: mssql-server-14.0.900.75-1.x86_64 ..............................................[done]

Maryland:/home

 

fd@Maryland:/home> sudo /opt/mssql/bin/mssql-conf setup

Locale en_GB not supported. Using en_US.

The license terms for this product can be found in

/usr/share/doc/mssql-server or downloaded from:

https://go.microsoft.com/fwlink/?LinkId=852741

 

The privacy statement can be viewed at:

https://go.microsoft.com/fwlink/?LinkId=853010

 

Do you accept the license terms? [Yes/No]:Yes

 

 

I chose "2) Developer (free, no production use rights)"  French   sa and a password that meets the following requirements:

Regarding the "days left in the evaluation period", the procedure reassures us: "This message does not reflect the edition you selected. It relates to the preview period for RC2."

 

If I look at KSysGuard, I see that a chunk of my laptop's RAM is now being used by an unexpected process on a Linux machine.

Let's explore this new piece of software:

fd@Maryland:/home> systemctl status mssql-server

● mssql-server.service - Microsoft SQL Server Database Engine

   Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; enabled; vendor preset: disabled)

   Active: active (running) since Fri 2017-09-01 19:03:56 CEST; 6min ago

     Docs: https://docs.microsoft.com/en-us/sql/linux

Main PID: 6620 (sqlservr)

   CGroup: /system.slice/mssql-server.service

           ├─6620 /opt/mssql/bin/sqlservr

           └─6645 /opt/mssql/bin/sqlservr

fd@Maryland:/home> ps aux | grep 6620

mssql     6620  0.0  0.2  89060 22604 ?        Ssl  19:03   0:00 /opt/mssql/bin/sqlservr

 

But we've only installed the database engine -- we can't do much with that!  The procedure https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-suse goes on to explain how to install the tools: "to create a database, you need to connect with a tool that can run Transact-SQL statements on the SQL Server. The following steps install the SQL Server command-line tools: sqlcmd and bcp."

sudo zypper addrepo -fc https://packages.microsoft.com/config/sles/12/prod.repo

sudo zypper --gpg-auto-import-keys refresh

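The quickstart then installs the tools package with something like this (command as given in the linked Microsoft procedure):

sudo zypper install -y mssql-tools unixODBC-devel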

 

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc
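Then connect locally with sqlcmd; a sketch (omit -P to be prompted for the sa password you chose during setup):

sqlcmd -S localhost -U sa -P '<YourPassword>'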

The simplicity of SQL Server at work:

1> create database oddsandends

2> go

1> SELECT Name from sys.Databases

2> go

Name                                                                                                                       

--------------------------------------------------------------------------------------------------------------------------------

master                                                                                                                     

tempdb                                                                                                                     

model                                                                                                                      

msdb                                                                                                                       

oddsandends
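(The meals table itself was created off-screen earlier in the session; a hypothetical definition consistent with the output columns below would be:)

1> CREATE TABLE meals (id INT, meal_type VARCHAR(20), food VARCHAR(50), quantity INT, unit VARCHAR(20), moment DATE)
2> go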

 

1> INSERT INTO meals  VALUES (1, 'lunch','cheese', 120,'g',CURRENT_TIMESTAMP)

2> go

 

(1 rows affected)

1> select * from meals

2> go

id          meal_type            food                                               quantity    unit                 moment     

----------- -------------------- -------------------------------------------------- ----------- -------------------- ----------------

          1 lunch                broccoli                                                   150 g                          2017-09-01

          1 lunch                cheese                                                     120 g                          2017-09-01

 

(2 rows affected)

 

SELECT CASE
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '8%'    THEN 'SQL2000'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '9%'    THEN 'SQL2005'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '10.0%' THEN 'SQL2008'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '10.5%' THEN 'SQL2008 R2'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '11%'   THEN 'SQL2012'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '12%'   THEN 'SQL2014'
         WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('productversion')) LIKE '13%'   THEN 'SQL2016'
         ELSE 'unknown'
       END AS MajorVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
go

 

MajorVersion ProductLevel                             Edition                                  ProductVersion                      

------------ ---------------------------------------- ---------------------------------------- ----------------------------------------

unknown      RC2                                      Developer Edition (64-bit)               14.0.900.75

 

Version 14.x (SQL Server 2017) postdates the CASE expression above, hence the 'unknown' MajorVersion. But:

1> :serverlist

Server list is currently unavailable in the Linux version.

I love SQLcl, but one must admit that it's not yet widely available at customer sites. So getting the hang of good ol' SQL*Plus is still a must-have skill for Oracle DBAs. Here's an interesting challenge, and an easy solution.

 

I'm selecting the lines that were input in the last hour:

select * from nit.spri where moment > sysdate-1/24 order by moment ;

MOMENT NUM NOTES

----------- ---------- --------------------------------------------------

24-07 13:50 100 insertion auto via KSH

24-07 13:50 101 insertion auto via KSH

24-07 13:50 102 insertion auto via KSH

24-07 13:50 103 insertion auto via KSH

24-07 13:50 104 insertion auto via KSH

24-07 13:50 105 insertion auto via KSH

24-07 13:51 106 insertion auto via KSH

24-07 13:51 107 insertion auto via KSH

24-07 13:51 108 insertion auto via KSH

24-07 13:51 100 insertion auto via KSH

24-07 14:00 200 insertion auto via KSH

24-07 14:00 201 insertion auto via KSH

24-07 14:00 202 insertion auto via KSH

24-07 14:00 203 insertion auto via KSH

24-07 14:00 204 insertion auto via KSH

24-07 14:00 205 insertion auto via KSH

24-07 14:00 206 insertion auto via KSH

24-07 14:06 200 insertion auto via KSH

24-07 14:06 201 insertion auto via KSH

24-07 14:06 202 insertion auto via KSH

24-07 14:06 203 insertion auto via KSH

24-07 14:06 204 insertion auto via KSH

24-07 14:06 205 insertion auto via KSH

24-07 14:06 206 insertion auto via KSH

24-07 14:06 207 insertion auto via KSH

24-07 14:07 208 insertion auto via KSH

24-07 14:07 209 insertion auto via KSH

24-07 14:10 400 insertion auto via KSH

24-07 14:10 401 insertion auto via KSH

24-07 14:10 402 insertion auto via KSH

24-07 14:10 403 insertion auto via KSH

24-07 14:10 404 insertion auto via KSH

24-07 14:10 405 insertion auto via KSH

24-07 14:10 406 insertion auto via KSH

24-07 14:10 407 insertion auto via KSH

24-07 14:10 408 insertion auto via KSH

24-07 14:13 400 insertion auto via KSH

24-07 14:13 401 insertion auto via KSH

24-07 14:13 402 insertion auto via KSH

24-07 14:13 403 insertion auto via KSH

24-07 14:13 404 insertion auto via KSH

24-07 14:13 405 insertion auto via KSH

24-07 14:13 406 insertion auto via KSH

24-07 14:14 407 insertion auto via KSH

24-07 14:14 408 insertion auto via KSH

24-07 14:14 409 insertion auto via KSH

24-07 14:15 500 insertion auto via KSH

24-07 14:15 501 insertion auto via KSH

24-07 14:15 502 insertion auto via KSH

24-07 14:15 503 insertion auto via KSH

24-07 14:15 504 insertion auto via KSH

24-07 14:15 505 insertion auto via KSH

24-07 14:15 506 insertion auto via KSH

24-07 14:15 507 insertion auto via KSH

24-07 14:17 600 insertion auto via KSH

24-07 14:22 601 insertion auto via KSH


 

56 rows selected.


56 rows is too many for my needs, so I decide to select only the lines that were input in the last 20 minutes (a day has 1,440 minutes, hence 20/1440). This means that I must change

where moment > sysdate-1/24

to

where moment > sysdate-20/1440
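(For reference, a quick sketch of Oracle's fractions-of-a-day date arithmetic:)

select sysdate            as now,
       sysdate - 1        as a_day_ago,
       sysdate - 1/24     as an_hour_ago,
       sysdate - 20/1440  as twenty_min_ago
from dual;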


As you probably know, in SQL*Plus, a character string can be changed into another with the change keyword (c for short):

 

select log_mode,open_mode,status from v$database

*

ERROR at line 1:

ORA-00904: "STATUS": invalid identifier

 

SQL> c/status/db_unique_name

1* select log_mode,open_mode,db_unique_name from v$database

SQL> /

LOG_MODE OPEN_MODE DB_UNIQUE_NAME

------------ -------------------- ------------------------------

NOARCHIVELOG READ WRITE orcl


 

As you can see in the example above, the typical field separator is a slash. So how am I going to turn sysdate-1/24 into sysdate-20/1440?

Fortunately, SQL*Plus allows for any field separator you like. Let’s choose an exclamation point:

 

SQL> c!1/24!20/1440

  1* select * from nit.spri where moment > sysdate-20/1440 order by moment

SQL> /

MOMENT NUM NOTES

----------- ---------- --------------------------------------------------

24-07 14:06 200 insertion auto via KSH

24-07 14:06 201 insertion auto via KSH

24-07 14:06 202 insertion auto via KSH

24-07 14:06 203 insertion auto via KSH

24-07 14:06 204 insertion auto via KSH

24-07 14:06 205 insertion auto via KSH

24-07 14:06 206 insertion auto via KSH

24-07 14:06 207 insertion auto via KSH

24-07 14:07 208 insertion auto via KSH

24-07 14:07 209 insertion auto via KSH

24-07 14:10 400 insertion auto via KSH

24-07 14:10 401 insertion auto via KSH

24-07 14:10 402 insertion auto via KSH

24-07 14:10 403 insertion auto via KSH

24-07 14:10 404 insertion auto via KSH

24-07 14:10 405 insertion auto via KSH

24-07 14:10 406 insertion auto via KSH

24-07 14:10 407 insertion auto via KSH

24-07 14:10 408 insertion auto via KSH

24-07 14:13 400 insertion auto via KSH

24-07 14:13 401 insertion auto via KSH

24-07 14:13 402 insertion auto via KSH

24-07 14:13 403 insertion auto via KSH

24-07 14:13 404 insertion auto via KSH

24-07 14:13 405 insertion auto via KSH

24-07 14:13 406 insertion auto via KSH

24-07 14:14 407 insertion auto via KSH

24-07 14:14 408 insertion auto via KSH

24-07 14:14 409 insertion auto via KSH

24-07 14:15 500 insertion auto via KSH

24-07 14:15 501 insertion auto via KSH

24-07 14:15 502 insertion auto via KSH

24-07 14:15 503 insertion auto via KSH

24-07 14:15 504 insertion auto via KSH

24-07 14:15 505 insertion auto via KSH

24-07 14:15 506 insertion auto via KSH

24-07 14:15 507 insertion auto via KSH

24-07 14:17 600 insertion auto via KSH

24-07 14:22 601 insertion auto via KSH


 

39 rows selected.


 

Now I've got 39 rows instead of 56 and I'm happy: mostly because I now know how to change a character string with a slash in it!


 

NOTE: this does not work in SQLcl:

 

05-09 08:23 dsvm1644 SYSTEM > c/status/sql_id
SP2-0023: String not found

 


 

Ljubljana, Slovenia, May 23rd: I gave 2 presentations to about 30 members of SIOUG:

 


 

Oddity: I got to stay in a hangar with cardboard/wooden boxes as offices/bedrooms inside.  I recommend it!

 

 

Madrid, Spain, May 24: Speakers' Dinner.

May 25: Technical SPOUG Day, in Oracle's Madrid offices.

I did my "Performance Tuning with SQLDeveloper" live demo in front of about 40 people: MadridMay25DearDBA.jpg

 

May 30-31, Düsseldorf: I gave my "DBAs, Secure Your Listener" talk to about 50 people at DOAG Datenbank.

 

Glasgow, United Kingdom, June 21st: during UKOUG's Scotland 2017 event,

I did my "Performance Tuning with SQLDeveloper live demo" in front of about 15 people. 20170621_135744.jpg