
Database Support Blog




Extended Support Fee Waiver for Oracle Database 12.1 and 11.2 for Oracle E-Business Suite

Due to the certification timing of Oracle E-Business Suite with Oracle Database versions beyond 12.1, Oracle has waived Extended Support fees for Oracle Database 12.1 and 11.2 for Oracle Database licenses used for Oracle E-Business Suite through December 2020.


This Extended Support fee waiver will be specific to the Oracle Database licenses, within your company’s footprint, that you use for running Oracle E-Business Suite software.


If you have already paid your Oracle Database Extended Support fees, you may be eligible to receive a credit that we will apply against your Oracle account.


The Lifetime Support section of the Oracle Software Technical Support Policies documents the Extended Support fee waiver. To contact Oracle and determine your eligibility for either the Extended Support fee waiver or a credit for previously paid Extended Support fees, you can access your on-premises Oracle Premier Support renewal information via My Support Renewals using your Oracle Single Sign-On and log a service request, or, alternatively, contact Oracle Support Renewals using the Oracle Global Contacts page.


Oracle has been a trusted resource for our customers for over 40 years. We always strive to provide innovations that help your business succeed and help you keep your systems secure. We look forward to our continuing relationship.

What's New in Trace File Analyzer 19.2

For details of previous versions, download the TFA Version History.


New features in TFA 19.2:

Temporarily restrict automatic collections for specific events


  • dbracinst: Oracle RAC-related data collection for Oracle Database problems
  • gridinfra: Oracle RAC-related data collection for Oracle Clusterware problems
  • gridinfrainst: Data collection for Oracle RAC upgrade and patching problems
  • exsmartscan: Oracle Exadata - Smart Scan not working problems
  • zlgeneric: Data collection for Zero Data Loss Recovery Appliance (ZDLRA) problems


Event-specific collection for default diagnostic collection


The default diagcollect has been enhanced to make it easier to collect the most useful diagnostics.

When you run a default diagnostic collection, it will now present you with a list of recent events that you can optionally collect for.



Option to restrict the collection of excessively large files


Prevent excessively large files from delaying or stalling collections.

Run tfactl set maxfilecollectionsize so that the diagnostic collection command considers only the last 200 KB of any file larger than the specified size.
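As a rough sketch of what such a size cap means in practice (conceptual only — this is not TFA's internal code, just an illustration of keeping the last 200 KB of an oversize file):

```shell
# Conceptual sketch only -- NOT TFA internals. Shows what "consider only the
# last 200 KB" means for a file larger than the configured limit.
f=$(mktemp)
head -c 500000 /dev/zero > "$f"        # simulate a 500 KB file
tail -c 204800 "$f" > "$f.trimmed"     # keep only the last 200 KB (204800 bytes)
wc -c < "$f.trimmed"                   # 204800
rm -f "$f" "$f.trimmed"
```

Capping per-file size this way keeps one runaway trace file from delaying the whole collection.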


For more information on the TFA Collector, see TFA Collector - TFA with Database Support Tools Bundle (Note 1513912.1)

Upgrading an Oracle Database While Performing A Migration (Cross Platform or Within the Same OS)


This article explains the process of upgrading a database during migration. Performing the migration and the upgrade separately takes considerably longer, but by following the correct process you can perform both activities together.


Difference between upgrade and migration

Upgrade refers to upgrading the software version, while migration refers to moving the database from one location to another (the release version may or may not change).


Before performing migration or upgrade, verify the compatibility:

  • Compatibility of upgrade (using DBUA/manual method)

Before performing an upgrade, make sure that you are upgrading to a supported version; refer to the document below (this applies to both manual and DBUA upgrades):


Database Server Upgrade/Downgrade Compatibility Matrix (Doc ID 551141.1)


  • Compatibility of upgrade (using Exp/Imp)

For an upgrade using exp/imp, follow the compatibility matrix below:


Compatibility Matrix for Export And Import Between Different Oracle Versions [Video] (Doc ID 132904.1)


  • Compatibility of migration

Before migrating the database to another OS, make sure to verify Endian compatibility, as not all methods support cross-platform migration.
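The authoritative endian list for each platform lives in the database view V$TRANSPORTABLE_PLATFORM; the sketch below only illustrates the host byte-order concept that makes the check necessary (example platform names are for illustration):

```shell
# Illustrative sketch: determine the byte order of the current host.
# Migration methods that move datafiles directly require the source and target
# endian formats to match (or a conversion step, e.g. with RMAN, in between).
order=$(printf '\001\000' | od -An -tx2 | tr -d ' \n')
if [ "$order" = "0001" ]; then
  echo "little-endian (e.g. Linux x86-64, Windows x64)"
else
  echo "big-endian (e.g. AIX on POWER, Solaris SPARC, HP-UX PA-RISC)"
fi
```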




Different Methods of Upgrade

Basically, there are two ways of upgrading the database:

  • Running catupgrd.sql (manually or via DBUA)
  • Export/Import (exp/imp or Data Pump)


Pre-Upgrade / Pre-Migration Steps

Make sure to run all pre-upgrade checks on the source database by executing the pre-upgrade script from the source Oracle home.


This script performs the pre-upgrade checks and provides recommendations. Follow all recommendations and warnings from this script carefully; this eliminates most of the known issues during an upgrade. Also make sure that no component, and no SYS/SYSTEM-owned object, is in an INVALID state before performing the upgrade, as invalid components or SYS/SYSTEM-owned objects can cause issues during the upgrade.

These steps are often missed during a migration; if you will perform an upgrade as well, fix any existing issues in the source database before migrating.


Different Methods of Migration

There are multiple methods of upgrading and migrating a database together. Below are some of them:

  • OS or SAN copy
  • RMAN backup and restore
  • RMAN duplicate in noopen mode
  • Other methods


The document below describes the different processes:


Migration Of An Oracle Database Across OS Platforms (Generic Platform)(Doc ID 733205.1)



We invite you to attend a one-hour webcast on this topic, scheduled for April 23rd, 2019, that explains these methods in detail.


Upgrading an Oracle Database While Performing A Migration (Cross Platform or Within the Same OS)



Tuesday, April 23, 2019 08:00 AM (US Pacific Time)

Tuesday, April 23, 2019 05:00 PM (Central European Time)

Tuesday, April 23, 2019 08:30 PM (India Standard Time)


WebEx Conference Details

Event Number: 598 439 151

Event Passcode: 909090

Register for this Advisor Webcast:

Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

What is AskTOM Office Hours?


Office Hours offers free, live Q&A sessions with experts in various aspects of Oracle technologies. Our experts are drawn from the ranks of Product Managers, developers, development managers and evangelists. We make ourselves available each month to help you be as successful as you can with Oracle products.


Why should you subscribe?


You don't have to register for an Office Hours on a particular topic. You can simply click on the Join Session button when you go to your Office Hours page within an hour of a session starting.

If you register, however, you will:

  • Get email reminders of your upcoming sessions (a weekly digest on Monday with all your upcoming Office Hours for that week, plus a reminder within an hour of the session starting).
  • Be able to send a question to the experts in advance of the session, giving us a chance to review the question and prepare an answer.
  • Be able to rate the session and give us advice on how to improve Office Hours in the future.
  • Hear about new Office Hours sessions we've recently added.

So we hope you'll sign up and take full advantage of Office Hours!


How does Office Hours work?


The Office Hours page will display a Join Session button within an hour of your session starting. Click it when the time is right, and you'll be placed in the conference, where we will take your questions via chat.

Check the Get Ready section on the page in advance of the session, so you can make sure your browser and computer are configured properly to participate in the webcast.


And if I don't want reminders?


We understand. You probably get an awful lot of emails. You'll get at most one digest email on Monday, and one reminder before your session. But if you want to rely solely on your memory and calendar, just sign in to AskTOM, navigate to the Office Hours session and click on the Withdraw button. You will no longer receive notice of sessions on this topic.


What can I ask about?


Office Hours sessions will usually have a particular focus (example: exception handling in PL/SQL), but could also be a truly open Q&A session ("Ask me anything about PL/SQL").

The objective of Office Hours is to improve your expertise at utilizing Oracle Database to build your applications. Office Hours is not a substitute for My Oracle Support. We will not discuss tickets logged with Support. Office Hours is not a place to discuss licensing or account issues. You need to take up such issues with your Oracle account representative.

You are welcome to share code with us on an Office Hours session, but please be aware that other developers not working for your company may see that code. By sharing your code, you are also giving us permission to view and display that code to others.


Here are some of the upcoming AskTom Sessions that you might enjoy:


For a full list of upcoming sessions, go here:

In 12c, Oracle XML Database (XDB) is a mandatory component and cannot be uninstalled.  This is because there are many dependencies on XDB.  The following database components / features rely on XDB to be installed and in a VALID state:

•    Oracle Application Express (APEX)

•    Oracle Expression Filter

•    Oracle interMedia / Multimedia DICOM

•    Oracle Multimedia Image metadata extraction of EXIF, ORDIMAGE, IPTC and XMP metadata

•    Spatial (including Oracle Locator)

•    OLAP

•    Oracle Applications such as iRecruitment

•    Any procedure using UTL_TCP, UTL_HTTP, UTL_SMTP

•    XMLTYPE operations

•    XBRL (Extensible Business Reporting Language)


If the XDB component becomes invalid, the components above become invalid as well.  XDB can become invalid for different reasons, but a common culprit is the setting of the LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH environment variable.  This environment variable is used to resolve the location of the XDB shared library (whose file name differs by platform).  If Oracle is not able to access that library, many different errors and symptoms can occur.


First, let's discuss which platform uses which environment variable:



Environment Variable      Platforms
LD_LIBRARY_PATH           Linux, Solaris
LIBPATH                   AIX
SHLIB_PATH                HP-UX



Oftentimes, the LD_LIBRARY_PATH environment variable is set on AIX.  However, that setting has no effect, as AIX uses the LIBPATH environment variable.  Keep this in mind if you are using AIX.


It is very important that the LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH environment variable be set correctly.  That is, the first directory referenced must be $ORACLE_HOME/lib.  You can reference other paths in this setting, but $ORACLE_HOME/lib must come first.
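A quick sanity check for that ordering rule might look like this (a hypothetical helper; the ORACLE_HOME and path values below are examples, so substitute your real ones):

```shell
# Hypothetical sanity check: the FIRST entry of LD_LIBRARY_PATH (or LIBPATH /
# SHLIB_PATH) must be $ORACLE_HOME/lib. Paths below are example values only.
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
LD_LIBRARY_PATH="$ORACLE_HOME/lib:/usr/lib:/lib"
first=${LD_LIBRARY_PATH%%:*}            # everything before the first ':'
if [ "$first" = "$ORACLE_HOME/lib" ]; then
  echo "OK: \$ORACLE_HOME/lib is the first entry"
else
  echo "WRONG: first entry is $first"
fi
```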


If this environment variable was not set correctly when the instance was last started, you may receive one or more of the following errors when installing XDB, upgrading XDB, configuring APEX, running an Export, selecting from xdb.xdb$resource/sys.dba_network_acls, or using other XDB features:

•    ORA-31159: XML DB is in an invalid state

•    ORA-00600: internal error code, arguments: [unable to load XDB library]

•    ORA-00600: internal error code, arguments: [qmx: no ref]

•    ORA-00600: internal error code, arguments: [qmtGetColumnInfo1]

•    ORA-00600: internal error code, arguments: [qmtb_init_len]

•    ORA-00600: internal error code, arguments: [qmtGetBaseType]

•    ORA-00600: internal error code, arguments: [psdnop-1], [600]

•    ORA-00600: internal error code, arguments: [qmtInit1]

•    ORA-07445: exception encountered: core dump [_memcpy()+224] [SIGSEGV] [Address not mapped to object]

•    ORA-19051 Cannot use fast path insert for this XMLType table

•    ORA-31011: XML parsing failed

•    ORA-00600: internal error code, arguments: [qmtcolcb_nomatch], [ID], [annotation], [], [], [], [], [], [], [], [], []


Errors of this sort generally occur when the init routines for the internal XDB functions are run in an invalid environment, causing memory corruption.


Many failed upgrades happen because the LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH environment variable is pointing to the wrong $ORACLE_HOME/lib directory rather than to the correct location for the instance, or it is not even set at all.



To check if the LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH environment variable was set correctly during the last database startup, run the following OS commands:


Determine the pid of the smon process:


% ps -ef | grep smon


Get the environment of the smon process:


% pargs -e <pid from above>  -- Solaris

% ps eauwww <pid from above>  -- Linux and AIX
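On Linux, an alternative (assuming /proc is available) is to read the process environment directly; $$ below is a stand-in for the smon pid you found via ps:

```shell
# Linux-only alternative: read a process's environment straight from /proc.
# Replace $$ (this shell's own pid) with the smon pid found via ps.
pid=$$
tr '\0' '\n' < /proc/"$pid"/environ \
  | grep -E '^(LD_LIBRARY_PATH|LIBPATH|SHLIB_PATH)=' \
  || echo "no library path variable set for pid $pid"
```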


See the following document for more information:

(Doc ID 373303.1) How to Check the Environment Variables for an Oracle Process


If you find that the LD_LIBRARY_PATH / LIBPATH / SHLIB_PATH environment variable is not set or is not set correctly, do the following:


1. Shut down the database


2. Set LD_LIBRARY_PATH (LIBPATH for AIX or SHLIB_PATH for HP) as follows:


    csh: setenv LD_LIBRARY_PATH $ORACLE_HOME/lib:<other paths>

    ksh: export LD_LIBRARY_PATH=$ORACLE_HOME/lib:<other paths>


3. On AIX only, run /usr/sbin/slibclean as root to remove any currently unused modules in the kernel and library memory.


4. Restart the database.


If you are using RAC and start the database with the srvctl command, see the document below:

(Doc ID 733567.1) Setting Environment Variables Through Srvctl


We hope this information is helpful. For more information on the installation of XDB and best practices, see the following document:

(Doc ID 1292089.1) Master Note for Oracle XML Database (XDB) Install / Deinstall

Oracle Trace File Analyzer 18.4.1

New Release Trace File Analyzer 18.4.1  Now Available


New SRDCs (Service Request Diagnostic Collections)

The following new SRDCs are now available:

  • dbdatapatch - for Datapatch problems
  • dbstandalonedbca - for Database Configuration Assistant problems
  • dbspatialinstall - for Oracle Spatial Installation problems
  • dbrmanperf - has been updated to include diagnostic data collection for RMAN Backup, RMAN Maintenance, RMAN Restore and Recover, RMAN-08137 or RMAN-08120, and RMAN Performance problems
  • dbtextupgrade - for Oracle Text version and later upgrade problems
  • ORA-04023 - for ORA-04023 problems
  • ORA-04063 - for ORA-04063 problems
  • dbspatialexportimport - for Oracle Spatial Export/Import problems


As with all other SRDCs use tfactl diagcollect -srdc srdc_name.

Merging of multiple automatic SRDCs

If TFA detects a significant event, it waits five minutes to check for any other events that occur as a result, and collects all of them.

If multiple SRDC-related events occur on the same database within that five-minute period, TFA performs one merged SRDC collection for all of them.

For example, if an ORA-00600 event occurs and 30 seconds later an ORA-04031 event occurs, TFA performs one SRDC that collects everything necessary to resolve both errors.  Merging of SRDCs applies to the following:


  • ORA-07445
  • ORA-04030
  • ORA-04031

Over future releases the automatic collection of specific SRDCs will be extended to cover other events.
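The five-minute merge window can be sketched like this (conceptual only, not TFA code; GNU date is assumed, and the timestamps echo the ORA-00600/ORA-04031 example above):

```shell
# Conceptual sketch of the merge decision: two events collapse into one merged
# SRDC collection when the second occurs within 300 seconds of the first.
t1=$(date -d '2018-09-13 08:09:49' +%s)   # first event (e.g. ORA-00600)
t2=$(date -d '2018-09-13 08:10:19' +%s)   # second event 30 seconds later (e.g. ORA-04031)
if [ $((t2 - t1)) -le 300 ]; then
  echo "merge into one SRDC collection"
else
  echo "separate collections"
fi
```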

Running SRDC commands in silent mode

SRDCs have prompts for problem details, which are specific to the SRDC type.

The answers to these prompts can now be provided directly from the command line, allowing the full automation of all SRDCs.

You can view the SRDC-specific prompt by running:

tfactl diagcollect -srdc {srdc_type} -help

Step by Step Instructions for Detection, Diagnosis & Resolution of ORA-600 (ORA-00600 Internal error)


As a DBA it is not uncommon to find ORA-00600 errors in your alert log.


Because there is no other context to the ORA-600, it can be difficult to know whether it's something you need to act on.


This post guides you through what the error means, how to use Oracle Trace File Analyzer (TFA) to detect it, how to diagnose the cause and then how to resolve.




This is the overall process for detection, diagnosis & resolution:

[Image: ORA-600 detection, diagnosis and resolution process]


What is an ORA-600 (ORA-00600 internal error)?

ORA-00600 internal error, also referred to as ORA-600, is a generic error code. The Oracle Database uses ORA-00600 when it encounters an unexpected internal condition.

The error will look like this:

Thu Sep 13 08:09:49 2018

Errors in file /scratch/app/oradb/diag/rdbms/ogg11204/ogg112041/trace/ogg112041_ora_8881.trc  (incident=99577):

ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []

Incident details in: /scratch/app/oradb/diag/rdbms/ogg11204/ogg112041/incident/incdir_99577/ogg112041_ora_8881_i99577.trc

The arguments in square brackets will contain different values. These depend on where in the code the problem occurred and what it was doing.


The value of the first argument shows where in the code the error occurred. Later arguments contain other runtime values such as variable values.


The first argument and database version is often enough to identify a known problem. But as functions get invoked in many different ways, Oracle Support may also need to use the error stack trace for more specific analysis. This shows the code path taken leading up to the error.
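Pulling the first argument out of an alert-log line can be done with standard tools (illustrative only, not an Oracle-provided utility; the sample line matches the example trace output shown later in this post):

```shell
# Illustrative: extract the first ORA-00600 argument, which identifies where
# in the code the error occurred.
line='ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []'
first_arg=$(printf '%s\n' "$line" | sed 's/.*arguments: \[\([^]]*\)\].*/\1/')
echo "$first_arg"    # ktfbtgex-7
```

That first argument, together with the database version, is usually the key you (or the Troubleshooting Tool) match against known issues.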




1) Detecting ORA-00600 Errors

The first step is to detect when an ORA-00600 error occurs. For this you should make sure the latest Oracle Trace File Analyzer (TFA) is installed and email notifications are enabled.


You may already have TFA installed. It's included with Oracle Grid Infrastructure and with the 12.2 Database. For more details on TFA, take a look at the TFA Overview Guide.


Oracle releases new versions of TFA every 3 months. New versions include bug fixes and enhancements to the data collected.  So you should upgrade to the latest if possible.


For the purposes of ORA-00600 error detection, the earliest version of TFA you should have is 18.3.0.


TFA installation

Perform the TFA installation as the root user.


Installation will:

  • Install on all nodes if clustered
  • Auto discover relevant Oracle Software & Exadata Storage Servers
  • Start monitoring all discovered items for significant events
  • Automatically collect all required diagnostics when significant errors occur and send you an email notification


To install TFA follow these steps:


1) Download the latest version from My Oracle Support Document 1513912.1, transfer it to the required server, and unzip it.

Execute the installer (if the database is clustered, the installer takes care of the remote nodes as well):




2) Configure the email(s) to use for notification when TFA detects significant errors

To set the notification email to use for a specific ORACLE_HOME, include the operating system owner in the command:


tfactl set notificationAddress=os_user:email


For example (hypothetical values):


tfactl set notificationAddress=oracle:dba@example.com


To set the notification email to use for any ORACLE_HOME:


tfactl set notificationAddress=email


For example (hypothetical value):


tfactl set notificationAddress=dba@example.com



3) Configure the SMTP server for TFA to use when sending email notifications.

Set the SMTP parameters when prompted:


tfactl set smtp


You can verify the smtp configuration using:


tfactl sendmail email_address


When TFA detects an ORA-00600 error has occurred (or other events), you will get an email notification like this:

[Image: ORA-600 notification email]




What to do if you can’t install TFA as root?

If you are unable to install as root, TFA is still useful for diagnostic collection.


You will not be able to detect errors or collect from remote hosts unless TFA is installed as root.


1) Download the latest version from My Oracle Support Document 1513912.1, transfer to each required server & unzip:

2) Execute the installer:


./installTFA<platform> -extractto <dir> -javahome <jre_home>
Use JRE 1.8 or higher; a suitable JRE is already available in the RDBMS home.



If you install as a non-root user, TFA will not be able to automatically detect ORA-00600 errors, so to collect diagnostic packages you will need to follow the steps in the next section, where you will perform an on-demand diagnostic collection.



What to do if the ORA-00600 error occurred before you had TFA installed

If the ORA-00600 error occurred before you had TFA 18.3.0+ installed you can still use TFA to collect all necessary diagnostics.


As the oracle user or equivalent run:


tfactl diagcollect -srdc ORA-00600



This will prompt for the time of the event and the database, then show you all corresponding errors found, and let you collect diagnostics for the appropriate one.


Then it will collect all diagnostics and package them in a single zip file, ready to move onto the next step.


For example:

bash-4.1$ /scratch/app/ diagcollect -srdc ORA-00600

Enter the time of the ORA-00600 [YYYY-MM-DD HH24:MI:SS,<RETURN>=ALL] : 2018-09-13

Enter the Database Name [<RETURN>=ALL] :

No events matching the timestamp Sep/12/2018 18:00:00-Sep/13/2018 06:00:00.

The timestamp must be between Sep/13/2018 07:08:40 and Sep/13/2018 10:17:18.

  1. Sep/13/2018 10:17:18 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []
  2. Sep/13/2018 09:41:24 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []
  3. Sep/13/2018 09:31:29 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []
  4. Sep/13/2018 08:38:43 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []
  5. Sep/13/2018 08:09:49 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], []


Please choose the event : 1-5 [1] 1

Selected value is : ( 1. Sep/13/2018 10:17:18 : [ogg11204] ORA-00600: internal error code, arguments: [ktfbtgex-7], [1015817], [1024], [1015816], [], [], [], [], [], [], [], [] )

Scripts to be run by this srdc: ipspack rdahcve1210 rdahcve1120 rdahcve1110

Components included in this srdc: OS CRS DATABASE NOCHMOS

Collecting data for local node(s)

Scanning files from Sep/13/2018 04:17:18 to Sep/13/2018 16:17:18

Collection Id : 20180914032048myserver69

Detailed Logging at : /scratch/tfa_repository/srdc_ora600_collection_Fri_Sep_14_03_20_48_PDT_2018_node_local/diagcollect_20180914032048_myserver69.log

2018/09/14 03:20:53 PDT : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom

2018/09/14 03:20:53 PDT : Collection Name :

2018/09/14 03:20:54 PDT : Scanning of files for Collection in progress...

2018/09/14 03:20:54 PDT : Collecting additional diagnostic information...

2018/09/14 03:21:04 PDT : Getting list of files satisfying time range [09/13/2018 04:17:18 PDT, 09/13/2018 16:17:18 PDT]

2018/09/14 03:21:21 PDT : Collecting ADR incident files...

2018/09/14 03:22:27 PDT : Completed collection of additional diagnostic information...

2018/09/14 03:22:31 PDT : Completed Local Collection


|          Collection Summary          |


| Host       | Status    | Size | Time |


| myserver69 | Completed | 20MB |  97s |


Logs are being collected to: /scratch/tfa_repository/srdc_ora600_collection_Fri_Sep_14_03_20_48_PDT_2018_node_local




2) Compare diagnostics to known problems

ORA-00600 diagnosis can be done automatically using the My Oracle Support ORA-00600 Troubleshooting Tool.


The ORA-00600 Troubleshooting Tool performs the same actions as an older tool you may have heard of, the “ORA-600 lookup tool”, but it does so automatically and can also be used to log SRs.



On the first screen, choose “Troubleshoot a new issue” and then click Next.

[Image: ORA-600 Troubleshooting Tool, step 1]


Now choose the zip file TFA created, choose Upload, and then click Next.

[Image: ORA-600 Troubleshooting Tool, step 2]


The ORA-00600 Troubleshooting tool will now analyze the contents of the diagnostics in the TFA collected zip file and compare it to known problems.

Follow the instructions presented to understand more about the problem and how to fix it.

[Image: ORA-600 Troubleshooting Tool, step 3]



3) Create a new SR

If the troubleshooting tool is unable to find a known problem, or you need more help, scroll down and choose Create SR.

[Image: ORA-600 Troubleshooting Tool, step 4]


On the SR creation screen, complete all fields and choose Create SR.

[Image: ORA-600 Troubleshooting Tool, step 5]


This will create a new SR and automatically attach the TFA diagnostic collection you previously provided.

Oracle Support will now complete further analysis on the diagnostics and work with you to help resolve the problem.




In summary

ORA-00600 is a generic error code. The Oracle Database uses ORA-00600 when it encounters an unexpected internal condition.

Use Oracle Trace File Analyzer (TFA) to automatically detect ORA-00600 errors.

Alternatively, TFA can be used on demand after an ORA-00600 to collect and package all required diagnostics.

Use the My Oracle Support ORA-00600 Troubleshooting Tool to compare the TFA-collected diagnostics to known problems.

If you need further help continue with the Troubleshooting Tool to log an SR.

This will automatically upload the TFA diagnostic collection, providing Oracle Support with everything they need to help you resolve the problem.


For more information and to download Oracle Trace File Analyzer see Document 1513912.1.


A new release of Oracle Database will soon be available. We encourage you to start planning now to stay current.


We recommend the following upgrade options:


  • If you are currently running and you have an Unlimited License Agreement with no immediate need to upgrade, Oracle Database 19c is a good choice. The production release of Oracle Database 19c will be available in CY2019.
  • If you are currently running and you need to upgrade now, 18c is a good choice.


NOTE: Oracle Database 18c will be supported for two years after Oracle Database 19c has been released on all on-premises platforms. Oracle Database 19c is the Long Term Support Release, or “Terminal Patch Set”, for DB 12.2.


The purpose of these recommendations is to help you stay within the guidelines of Lifetime Support and Error Correction Policies (allowing you to avoid costly support exceptions), and to encourage you to take advantage of the new features that are available in the latest product releases.


For questions related to certification combinations and Lifetime Support agreements with other Oracle products like Oracle E-Business Suite, Oracle JD Edwards, Oracle PeopleSoft, and Oracle Siebel, please contact your Oracle Account Representative.


For additional information, refer to Release Schedule of Current Database Releases (Doc ID 742060.1).



Oracle Support

New Release Trace File Analyzer 18.3.0  Now Available

Automatic SRDCs

Oracle Trace File Analyzer will now automatically invoke the relevant SRDC when it detects the following events:

  • ORA-00600
  • ORA-04030
  • ORA-04031
  • ORA-07445


  • Add the user Tomcat runs as to the TFA access list tfactl access add -user {tomcat_user}



The following new SRDCs are now available:

  • dbsqlperf - for SQL performance problems
  • dbtde - for Transparent Data Encryption (TDE) problems
  • emagentperf - for Enterprise Manager Agent performance problems
  • emomscrash - for Enterprise Manager crash problems
  • emomsheap - for Enterprise Manager java heap usage or performance problems
  • emomshungcpu - for Enterprise Manager OMS crash, restart or performance problems

As with all other SRDCs use tfactl diagcollect -srdc srdc_name.

ORAchk automatic start from install

Oracle Trace File Analyzer root installations on Linux or Solaris on non-engineered systems will automatically setup and run the ORAchk daemon.

The daemon is restarted at 1 AM every morning in order to discover any environment changes. A full local ORAchk run is performed at 2 AM every morning, and a partial run of the most impactful checks runs every 2 hours via the oratier1 profile.

Any collections older than 2 weeks are automatically purged.

Once auto start is enabled, the daemon settings can be changed as normal, and auto start can be removed at any time by using tfactl run orachk -autostop.

Oracle Database 18c (18.3) has officially been released for download on Linux x86-64. See KM Doc 742060.1 for more details on the release schedule for other platforms.


This is the first version of the database to follow the new yearly release model.  Read KM Doc 2285040.1 for more details.  For practical purposes, you can think of Oracle Database 18c as a patch set of the Oracle Database 12c release. However, instead of being named “Oracle Database 12c Release 2”, the new naming structure reflects the year in which the product was released.
Oracle Database 18c contains a lot of small but extremely useful incremental improvements, most of which focus on the three key marquee features of Oracle Database 12c Release 2:


  •     Multitenant
  •     Database In-Memory
  •     Oracle Database Sharding


You can read more  details on the changes and improvements in Maria Colgan's Oracle Database blog post or in the Oracle  Database 18c documentation.

REST service

Oracle Trace File Analyzer now includes REST support, allowing invocation & query over HTTPS.

To facilitate development and testing this REST support, Oracle REST Data Services (ORDS) is included within the install.

To enable REST, start ORDS: tfactl rest -start.

The standalone ORDS setup feature utilizes file based user authentication and is provided solely for use in test and development environments.

For production use, the included ords.war should be deployed and configured.

REST supports printing details, starting a diagcollect, and downloading collections.

See the documentation for full API details and examples.

Oracle Cluster Health Advisor integration

Oracle Trace File Analyzer now integrates with Oracle cluster Health Advisor and consumes the problem events Oracle Cluster Health Advisor detects.

When Oracle Cluster Health Advisor detects a problem event Oracle Trace File Analyzer will automatically trigger the relevant diagnostic collection and send email notification.

Email notification is configured via the standard Oracle Trace File Analyzer notification process.


The following new SRDCs are now available:

  • ORA-01031 - for ORA-01031 errors
  • ORA-01578 - for ORA-01578 errors
  • ORA-08102 - for ORA-08102 errors
  • ORA-08103 - for ORA-08103 errors
  • dbblockcorruption - for problems showing alert log messages of "Corrupt block relative dba"
  • dbfs - for ASM/DBFS/DNFS/ACFS problems
  • dbpartition - for create/maintain partitioned/subpartitioned table/index problems
  • dbpartitionperf - for slow create/alter/drop commands against partitioned table/index
  • dbsqlperf - for SQL performance problems
  • dbundocorruption - for UNDO corruption problems
  • esexalogic for Exalogic full Exalogs data collection information
  • listener_services - for listener errors : TNS-12516 / TNS-12518 / TNS-12519 / TNS-12520
  • naming_services - for naming service errors: ORA-12154 / ORA-12514 / ORA-12528
  • dbaudit - Standard Information for Database Auditing


  • dbrman600 - for RMAN-00600 error
  • dbrmanperf - for RMAN Performance problems

As with all other SRDCs use tfactl diagcollect -srdc srdc_name.

Use external SMTP server for notifications

To configure the SMTP server that Oracle Trace File Analyzer uses for notifications, run tfactl set smtp.

You can check the current SMTP configuration details using tfactl print smtp.

Once configured, you can verify the configuration by sending a test email with tfactl sendmail {email_address}.

Metadata search capability

All metadata stored in the Oracle Trace File Analyzer index is now searchable using tfactl search -showdatatypes|-json [json_details].

Searching for all events for a particular database between certain dates can be done like this:

tfactl search -json '{ "data_type":"event", "content":"oracle", "database":"rac11g", "from":"01/20/2017 00:00:00", "to":"12/20/2018 00:00:00" }' 

Listing all index events can be done with: tfactl search -json '{"data_type":"event"}'

Listing all available datatypes can be done with: tfactl search -showdatatypes
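For scripting, the JSON filter passed to tfactl search -json can be built programmatically. A small sketch follows; the field names ("data_type", "database", "from", "to") are taken from the examples above, while the helper function itself is hypothetical:

```python
import json

# Build the JSON filter string for "tfactl search -json".
# Field names come from the documented examples; this helper is illustrative.
def build_search_filter(data_type, database=None, date_from=None, date_to=None):
    f = {"data_type": data_type}
    if database:
        f["database"] = database
    if date_from:
        f["from"] = date_from
    if date_to:
        f["to"] = date_to
    return json.dumps(f)

query = build_search_filter("event", database="rac11g",
                            date_from="01/20/2017 00:00:00",
                            date_to="12/20/2018 00:00:00")
print(query)  # pass this string to: tfactl search -json '<query>'
```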

3X Faster Performance


ORAchk & EXAchk have been completely rewritten, with a focus on performance and extensibility.

Internal testing results show this version runs up to 3X faster than the previous one. Performance is particularly improved on lower-resourced environments.


Option to Only Run Checks That Previously Failed


New option allows ORAchk/EXAchk to only run checks that failed previously:


    Generate a health check report

    Fix the issues identified

    Generate another health check report verifying only the issues that failed before


Use the failed checks option by passing in the HTML report, zip, or directory from a previous run:

-failedchecks previous_result


Windows Support Without Requiring Cygwin


As ORAchk has now been rewritten in Python, it no longer requires Cygwin on Windows.

ORAchk just needs the Python 2.7 software and libraries. At runtime it will report exactly which libraries it needs.


Run Multiple Instances at the Same Time


It is now possible to run multiple instances of ORAchk/EXAchk at the same time on the same machine.


As a result of this change, the locations of some generated files (such as the root script) have changed; by default they are now created in the user's home directory.


If you run as non-root, this location change will need to be updated in the /etc/sudoers file, for example:


Previous /etc/sudoers entry:

oracle ALL=(root) NOPASSWD:/home/oracle/.orachk/


New /etc/sudoers entry:

oracle ALL=(root) NOPASSWD:/home/oracle/


The location of this script can be changed by setting the environment variable RAT_ROOT_SH_DIR

export RAT_ROOT_SH_DIR=/some/other/location


ORAchk support for Grid Infrastructure with no Database


ORAchk now provides support for Grid Infrastructure stand-alone checks, where no database is installed.


To run Grid Infrastructure checks in an environment with no RDBMS installed, use the option:



New Oracle Stack Health Checks


As always this release of ORAchk and EXAchk contain many new Oracle Stack Health Checks.


For full details of the new checks see:


   ORAchk Health Check Catalog

   EXAchk Health Check Catalog


The Health Check Catalog is also contained within the download.

Oracle Trace File Analyzer
New Version Released
A new version of Trace File Analyzer has been released and is now available to download.
New features in this release:
  • Simplified Real-Time Summary Report
  • Cloud Collection Upload to Bug or TFAWeb
  • Automatic Collection Upload to SRs
  • Faster and Easier Service Request Data Collection
  • Fixed issues
Release Resources


New Feature Slide Deck (source slides)
Main TFA Customer Slide Deck (source slides)
TFA Overview White Paper

TFA Documentation


Simplified Real-Time Summary Report

The summary command has been simplified to provide a quick access, real-time view of all key database components.

It shows any potential problems with important elements such as:

  • Database
  • Clusterware
  • ASM
  • Patch
  • Listener
  • Network
  • OS

To run summary use: tfactl summary


Cloud Collection Upload to Bug or TFAWeb

Oracle Cloud environments can now use TFA to upload diagnostics directly to bugs and/or TFAWeb.

Upload diagnostics to TFAWeb and make use of the machine learning powered, Anomaly Timeline and analyzers.


Upload diagnostics to both TFAWeb and directly to a bug for developer analysis:



tfactl upload -bugsftp -bug <bug#> -user <UserId> <file1 file2>

Automatic Collection Upload to SRs

TFA can automatically upload diagnostics to an SR as soon as it's collected them.


Store the My Oracle Support credentials securely in the wallet with:  tfactl setupmos


Then add the -sr <sr_number> option to any diagcollect command to automatically upload the resulting collection to the Service Request. For example:

tfactl diagcollect -srdc <srdc_type> -sr <sr_number>
TFA can also upload any other file to a Service Request. Upload using the wallet, which was setup previously:
tfactl upload -wallet -sr <sr_number> <file1 file2>


Or upload without the wallet, which will prompt for the password:
tfactl upload -user <user_id> -sr <sr_number> <file1 file2>


Faster and Easier Service Request Data Collection

Oracle Trace File Analyzer includes new One Command SRDCs (Service Request Data Collections).


These make it fast and easy to collect exactly the right diagnostic data if you need help from Oracle Support.


These are the new SRDCs:

Type of Problem / SRDC Types
ORA Errors
  • ORA-01628
  • ORA-00020
  • ORA-30036
  • ORA-00060
  • ORA-01555
Database upgrade problems
  • dbpreupgrade
Database storage problems
  • asm
Excessive SYSAUX Space is Used by the Automatic Workload Repository (AWR)
  • dbawrspace
Database startup / shutdown problems
  • dbshutdown
  • dbstartup
Data Guard problems
  • dbdataguard
Enterprise Manager target discovery / add problems
  • emcliadd
  • emclusdisc
  • emdbsys
  • emgendisc
  • emprocdisc
Enterprise Manager OMS restart problems
  • emrestartoms


To run an SRDC collection use: tfactl diagcollect -srdc <srdc_type>

For more help run: tfactl diagcollect -srdc -h

Fixed issues

Issues fixed in this release can be seen with this Bug query

New Release Trace File Analyzer  Now Available

Faster & Easier SR Data Collection

Oracle Trace File Analyzer includes the new One Command SRDCs (Service Request Data Collections).

These make it fast and easy to collect exactly the right diagnostic data if you need help from Oracle Support.

The new SRDCs include:

Type of Problem / SRDC Types / Collection Scope
Database patching problems
  • dbpatchinstall
  • dbpatchconflict
Local only
Database install / upgrade problems
  • dbinstall
  • dbupgrade
Local only
Enterprise Manager tablespace usage metric problems
  • emtbsmetrics
Local only
(on EM Agent target)
Enterprise Manager general metrics page or threshold problems
  • emmetricalert
Local only
(on EM Agent target & OMS)
Enterprise Manager debug log collection
  • emdebugon
  • emdebugoff
Local only
(on EM Agent target & OMS)

Quick System & Cluster Summary

The new summary command can be used to quickly understand the viability of systems and clusters.

It shows a fast, easy to read summary of the status including any potential problems with important elements such as:

  • Clusterware
  • ASM
  • Patch
  • Listener
  • OS
  • Network

Summary is currently only supported on Linux.

Log Maintenance Made Easy

Use Oracle Trace File Analyzer to either automatically or manually purge database logs.

Automatic purging is OFF by default, except on a Domain Service Cluster (DSC).

The new managelogs command manages ADR log and trace files, including:

  • Shows disk space usage of individual diagnostic destinations
  • Purges logs and trace files based on diagnostic location and/or age

Oracle Trace File Analyzer will automatically check disk usage and record snapshots for later diagnosis.
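The idea behind purging by age can be sketched in a few lines. This is a generic, illustrative sketch only, not tfactl managelogs' actual implementation: it walks a directory and deletes files whose modification time is older than a given number of days.

```python
import os
import time

# Illustrative age-based purge (NOT tfactl's implementation): delete files
# under `directory` whose modification time is older than `days` days.
def purge_older_than(directory, days):
    cutoff = time.time() - days * 86400
    removed = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

In practice tfactl adds safeguards this sketch omits, such as restricting itself to known diagnostic destinations.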

Simple IPS (Incident Packaging Service) Interface

The Oracle Database stores details of certain problems in ADR (Automatic Diagnostic Repository).


IPS packages these details for later diagnosis.

Oracle Trace File Analyzer can invoke IPS to query and collect these packages.

Microsoft Windows Support

This release adds support for Microsoft Windows, on all versions supported by the Database.

Easy to Use Menu Interface

As well as the command line interface and the shell interface, this release adds a menu interface for simple access to diagnostic operations.

This starts with the new menu command.

Built on the Latest, Most Secure Java release 1.8

Oracle Trace File Analyzer now uses the latest Java features; this has made JRE 1.8 a requirement.

JRE 1.8 is already shipped with the latest versions of the 12c Database and Grid Infrastructure. It is also included with the download install.

The JRE is platform specific. As Oracle Trace File Analyzer includes the JRE, the install is now platform specific.

Oracle introduced a new feature, adaptive cursor sharing, in 11g, to improve the plans that are selected for queries containing bind variables. This feature can result in more cursors for the same query containing bind variables. We'll explain why in this article.

Before we get into the details, let's review a little history. Oracle introduced the bind peeking feature in Oracle 9i. With bind peeking, the Optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor. This allows the optimizer to determine the selectivity of any WHERE clause condition as if literals had been used instead of bind variables, thus improving the quality of the execution plan generated for statements using bind variables.

However, there was a problem with this approach when the column used in the WHERE clause with the bind contained a data skew. If there is data skew in the column, it is likely that a histogram has been created on this column during statistics gathering. When the optimizer peeks at the value of the user-defined bind variable and chooses a plan, it is not guaranteed that this plan will be good for all possible values of the bind variable. In other words, the plan is optimized for the peeked value of the bind variable, but not for all possible values.

In 11g, the optimizer has been enhanced to allow multiple execution plans to be used for a single statement that uses bind variables. This ensures that the best execution plan will be used depending on the bind value.

Let's look at an example to see exactly how this works. Assume I have a simple table emp which has 100,000 rows and one index called emp_i1 on the deptno column.
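The bind-peeking pitfall described above can be sketched as a toy model. This is purely illustrative, not Oracle internals; the row counts mirror the emp table used below, and the 1% selectivity threshold is made up:

```python
# Toy model of bind peeking (illustrative only): the plan chosen for the
# first (peeked) bind value is frozen into the shared cursor and reused
# for every later execution, even when a different plan would be better.
ROWS = 100_000
counts = {9: 10, 10: 99_900}  # skewed deptno distribution from the article

def best_plan(bind_value):
    selectivity = counts.get(bind_value, 0) / ROWS
    # Made-up threshold: an index pays off only for selective predicates.
    return "INDEX RANGE SCAN" if selectivity < 0.01 else "FULL TABLE SCAN"

peeked = 9                       # value peeked on the first execution
shared_plan = best_plan(peeked)  # this plan is now used for all bind values

for bind in (9, 10):
    print(f"deptno={bind}: shared plan {shared_plan}, optimal {best_plan(bind)}")
```

With the peeked value 9, every execution reuses the index plan, including the one for value 10 where a full scan would be far better; adaptive cursor sharing exists to fix exactly this.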

SQL> desc emp


Name                   Null?    Type

---------------------- -------- ----------------------------------

ENAME                           VARCHAR2(20)

EMPNO                           NUMBER

PHONE                           VARCHAR2(20)

DEPTNO                          NUMBER

There is a data skew in the deptno column, so when I gathered statistics on the emp table, Oracle automatically created a histogram on the deptno column.

SQL> select table_name, column_name, histogram from user_tab_cols;



TABLE_NAME         COLUMN_NAME        HISTOGRAM
------------------ ------------------ ---------------

EMP                DEPTNO             HEIGHT BALANCED

EMP                EMPNO              NONE

EMP                ENAME              NONE

EMP                PHONE              NONE

Now I will execute a simple select on my emp table, which has a single WHERE clause predicate on the deptno column. The predicate contains a bind variable. We will begin by using the value 9 for this bind variable. The value 9 occurs 10 times in the table, i.e. in 0.01% of the rows.

SQL> exec :deptno := 9


SQL> select /*ACS_1*/ count(*), max(empno)

2  from emp

3  where deptno = :deptno;




  COUNT(*) MAX(EMPNO)
---------- ----------
        10         99

Given how selective the value 9 is, we should expect to get an index range scan for this query. Let's check the execution plan.

SQL> select * from table(dbms_xplan.display_cursor);




SQL_ID 272gr4hapc9w1, child number 0


select /*ACS_1*/ count(*), max(empno) from emp where deptno = :deptno


Plan hash value: 3184478295


| Id | Operation                    | Name   | Rows | Bytes | Cost (%CPU)|
|  0 | SELECT STATEMENT             |        |      |       |    2 (100)|
|  1 |  SORT AGGREGATE              |        |    1 |    16 |           |
|  2 |   TABLE ACCESS BY INDEX ROWID| EMP    |    1 |    16 |    2   (0)|
|  3 |    INDEX RANGE SCAN          | EMP_I1 |    1 |       |    1   (0)|

So we got the index range scan that we expected. Now let's look at the execution statistics for this statement.

SQL> select child_number, executions, buffer_gets,

2 is_bind_sensitive, is_bind_aware

3  from v$sql

4  where sql_text like 'select /*ACS_1%';



CHILD_NUMBER EXECUTIONS BUFFER_GETS IS_BIND_SENSITIVE IS_BIND_AWARE
------------ ---------- ----------- ----------------- -------------
           0          1          53 Y                 N

You can see we have one child cursor that has been executed once and has a small number of buffer gets. We also see that the cursor has been marked bind sensitive. A cursor is marked bind sensitive if the optimizer believes the optimal plan may depend on the value of the bind variable. When a cursor is marked bind sensitive, Oracle monitors the behavior of the cursor using different bind values, to determine if a different plan for different bind values is called for. This cursor was marked bind sensitive because the histogram on the deptno column was used to compute the selectivity of the predicate "where deptno = :deptno". Since the presence of the histogram indicates that the column is skewed, different values of the bind variable may call for different plans.

Now let's change the value of the bind variable to 10, which is the most popular value for the deptno column. It occurs 99,900 times in the table, i.e. in 99.9% of the rows.
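The monitoring just described can be sketched as a toy state machine. The real criteria for promoting a cursor to bind aware are internal to Oracle; per the Q&A below, the number of rows processed is one input, and the order-of-magnitude threshold here is invented for illustration:

```python
# Toy sketch of the bind-sensitive -> bind-aware transition (illustrative
# only; Oracle's actual promotion criteria are undocumented).
class ChildCursor:
    def __init__(self):
        self.is_bind_sensitive = True  # a histogram was used for the predicate
        self.is_bind_aware = False
        self.rows_seen = []

    def execute(self, rows_processed):
        self.rows_seen.append(rows_processed)
        # Made-up rule: if observed row counts differ by two orders of
        # magnitude across executions, stop sharing one plan for all binds.
        if len(self.rows_seen) > 1 and \
           max(self.rows_seen) > 100 * max(1, min(self.rows_seen)):
            self.is_bind_aware = True

c = ChildCursor()
c.execute(10)           # :deptno = 9  -> 10 rows
print(c.is_bind_aware)  # False: one execution tells us nothing yet
c.execute(99_900)       # :deptno = 10 -> 99,900 rows
print(c.is_bind_aware)  # True: row counts differ wildly across bind values
```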

SQL> exec :deptno := 10


SQL> select /*ACS_1*/ count(*), max(empno)

2  from emp

3  where deptno = :deptno;



  COUNT(*) MAX(EMPNO)
---------- ----------
     99900     100000

We expect to get the same plan as before for this execution because Oracle initially assumes it can be shared. Let's check:

SQL> select * from table(dbms_xplan.display_cursor);




SQL_ID 272gr4hapc9w1, child number 0


select /*ACS_1*/ count(*), max(empno) from emp where deptno = :deptno


Plan hash value: 3184478295


| Id | Operation                    | Name   | Rows | Bytes | Cost (%CPU)|
|  0 | SELECT STATEMENT             |        |      |       |    2 (100)|
|  1 |  SORT AGGREGATE              |        |    1 |    16 |           |
|  2 |   TABLE ACCESS BY INDEX ROWID| EMP    |    1 |    16 |    2   (0)|
|  3 |    INDEX RANGE SCAN          | EMP_I1 |    1 |       |    1   (0)|


The plan is still an index range scan as before, but if we look at the execution statistics, we should see two executions and a big jump in the number of buffer gets from what we saw before.

SQL> select child_number, executions, buffer_gets,

2    is_bind_sensitive, is_bind_aware

3  from v$sql

4  where sql_text like 'select /*ACS_1%';



CHILD_NUMBER EXECUTIONS BUFFER_GETS IS_BIND_SENSITIVE IS_BIND_AWARE
------------ ---------- ----------- ----------------- -------------
           0          2        1007 Y                 N

You should also note that the cursor is still only marked bind sensitive and not bind aware at this point. So let's re-execute the statement using the same popular value, 10.

SQL> exec :deptno := 10


SQL> select /*ACS_1*/ count(*), max(empno)

2  from emp

3  where deptno = :deptno;



  COUNT(*) MAX(EMPNO)
---------- ----------
     99900     100000

Behind the scenes during the first two executions, Oracle was monitoring the behavior of the queries, and determined that the different bind values caused the data volumes manipulated by the query to be significantly different. Based on this difference, Oracle "adapts" its behavior so that the same plan is not always shared for this query. Hence a new plan is generated based on the current bind value, 10. Let's check what the new plan is.

SQL> select * from table(dbms_xplan.display_cursor);




SQL_ID 272gr4hapc9w1, child number 1


select /*ACS_1*/ count(*), max(empno) from emp where deptno = :deptno


Plan hash value: 2083865914


| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT    |      |       |       |   240 (100)|
|   1 |  SORT AGGREGATE     |      |     1 |    16 |            |
|*  2 |   TABLE ACCESS FULL | EMP  | 95000 | 1484K |   240   (1)|


Given how unselective the value 10 is in the table, it's not surprising that the new plan is a full table scan. Now if we display the execution statistics we should see that an additional child cursor (#1) has been created. Cursor #1 should show a number of buffer gets lower than cursor #0, and it is marked both bind sensitive and bind aware. A bind aware cursor may use different plans for different bind values, depending on how selective the predicates containing the bind variable are. Looking at the execution statistics:

SQL> select child_number, executions, buffer_gets,

2    is_bind_sensitive, is_bind_aware

3  from v$sql

4  where sql_text like 'select /*ACS_1%';



CHILD_NUMBER EXECUTIONS BUFFER_GETS IS_BIND_SENSITIVE IS_BIND_AWARE
------------ ---------- ----------- ----------------- -------------
           0          2        1007 Y                 N
           1          1         821 Y                 Y

we see that there is a new cursor, which represents the plan that uses a full table scan. But if we execute the query again with a more selective bind value, we should use the index plan:

SQL> exec :deptno := 9


SQL> select /*ACS_1*/ count(*), max(empno)

2  from emp

3  where deptno = :deptno;



  COUNT(*) MAX(EMPNO)
---------- ----------
        10         99

SQL> select * from table(dbms_xplan.display_cursor);




SQL_ID 272gr4hapc9w1, child number 2


select /*ACS_1*/ count(*), max(empno) from emp where deptno = :deptno


Plan hash value: 3184478295


| Id | Operation                    | Name   | Rows | Bytes | Cost (%CPU)|
|  0 | SELECT STATEMENT             |        |      |       |    2 (100)|
|  1 |  SORT AGGREGATE              |        |    1 |    16 |           |
|  2 |   TABLE ACCESS BY INDEX ROWID| EMP    |    1 |    16 |    2   (0)|
|  3 |    INDEX RANGE SCAN          | EMP_I1 |    1 |       |    1   (0)|


The proper plan was chosen, based on the selectivity produced by the current bind value. There is one last interesting thing to note about this. If we look at the execution statistics again, there are three cursors now:

SQL> select child_number, executions, buffer_gets,

2    is_bind_sensitive, is_bind_aware, is_shareable

3  from v$sql

4  where sql_text like 'select /*ACS_1%';



CHILD_NUMBER EXECUTIONS BUFFER_GETS IS_BIND_SENSITIVE IS_BIND_AWARE IS_SHAREABLE
------------ ---------- ----------- ----------------- ------------- ------------
           0          2         957 Y                 N             N
           1          1         765 Y                 Y             Y
           2          2           6 Y                 Y             Y

The original cursor was discarded when the cursor switched to bind aware mode. This is a one-time overhead. Note that the cursor is marked as not shareable (is_shareable is "N"), which means that this cursor will be among the first to be aged out of the cursor cache, and that it will no longer be used. In other words, it is just waiting to be garbage collected.

There is one other reason that you may notice additional cursors for such a query in 11g. When a new bind value is used, the optimizer tries to find a cursor that it thinks will be a good fit, based on similarity in the bind value's selectivity. If it cannot find such a cursor, it will create a new one (as above, when one (#1) was created for the unselective value "10" and one (#2) was created for the highly selective value "9"). If the plan for the new cursor is the same as one of the existing cursors, the two cursors will be merged to save space in the cursor cache. This will result in one being left behind in a not shareable state. This cursor will be aged out first if there is crowding in the cursor cache, and will not be used for future executions.
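The matching-by-selectivity behavior just described can be sketched as a toy model. It is illustrative only: the selectivity-range widths, the plan threshold, and the data structures are invented, not Oracle's actual implementation:

```python
# Toy sketch of bind-aware cursor matching (illustrative only): each
# bind-aware child cursor remembers a selectivity range; a new bind value
# reuses a cursor whose range covers its selectivity, otherwise a new
# child cursor is created.
ROWS = 100_000
histogram = {9: 10, 10: 99_900}  # deptno distribution from the article

def selectivity(bind_value):
    return histogram.get(bind_value, 0) / ROWS

def plan_for(sel):
    # Made-up threshold separating index-friendly from scan-friendly binds.
    return "INDEX RANGE SCAN" if sel < 0.01 else "FULL TABLE SCAN"

cursors = []  # list of (low_sel, high_sel, plan)

def execute(bind_value):
    sel = selectivity(bind_value)
    for low, high, plan in cursors:
        if low <= sel <= high:
            return plan  # matched an existing child cursor
    plan = plan_for(sel)
    # Remember a small range around this selectivity (widths are made up).
    cursors.append((sel * 0.9, sel * 1.1, plan))
    return plan

print(execute(10))   # new child for the unselective bind
print(execute(9))    # new child for the selective bind
print(execute(9))    # reuses the second child
print(len(cursors))  # two bind-aware children, one per selectivity range
```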

Q & A

Instead of answering the questions in your comments one by one, I am going to summarize the questions and provide my answers here.

Q: Is this behavior managed by 11g optimizer automatically and we don't need cursor_sharing anymore?

A: We have not changed the behavior of the cursor_sharing parameter yet, for backwards compatibility purposes. So if you set it to similar, adaptive cursor sharing will only kick in for queries where the literals are replaced with binds. We hope that in the future, this feature will persuade people to set cursor_sharing to force.

Q: Would it have any impact, like holding library cache latches for a longer time to search for the appropriate child cursor?

A: Any additional overhead in matching a cursor is always a concern, and we strive to minimize the impact. There is of course some increase in the code path to match a bind-aware cursor, since it requires more intelligent checks. This feature should not, however, impact cursors which are not yet marked bind-aware.

Q: What triggers a cursor to be marked "bind sensitive"?

A: Our goal is to consider many types of predicates where the selectivity can change when the bind value changes.
In this first version of the feature, we only handle equality predicates where a histogram exists on the column, and range predicates (with or without a histogram). We do not currently consider LIKE predicates, but it is at the top of our list for future work.

Q: Also, it sounds like the optimizer is using the number of rows returned to decide that it's time for a new plan...

A: I am not going to go into the details of the "special sauce" for how we decide to mark a cursor bind-aware. The number of rows processed is one input.

Q: Are you planning a hint to mark statements as bind-aware ?

A: Yes, we plan to add this in the future. This will allow users to bypass the startup cost of automatically determining that a query is a good candidate for bind-aware cursor sharing.
