September is Oracle Database Performance month

September is Oracle Database performance month for our blog series. Throughout the month, we will highlight some of the great performance tools we recommend for your database toolkit.


Why should you read this blog and revisit it during September? Well, there are lots of reasons:

  • Take the lead in your team: quickly and easily identify heavy loads or performance issues on your system and put action plans in place to improve performance
  • Create your own performance toolkit
  • Recognise potential performance issues before they arise
  • Deal with performance issues confidently
  • Ensure all diagnostic data is collected for the relevant time frame when engaging Oracle Support


Go ahead, read on and find out for yourself!


You might also consider completing the Oracle Support Database Accreditation to increase your expertise with tools and best practices to support your Oracle Database. The accreditation also includes a focus on the performance tools we will highlight throughout September. What do the coolest database professionals have in their tool kits? Let’s find out.


For further details about the Oracle Support Database Accreditation, and to learn more about Performance Tuning and Diagnostics, see the resources linked below.


[Image: Performance Toolkit]


Database Performance Tools: ADDM, AWR and ASH


Performance tuning and problem diagnosis are the two most challenging and important management tasks that any database administrator performs. Hence, let’s start our performance series with some of the tools that are delivered with the database:



The Oracle database automatically collects and stores workload information in the Automatic Workload Repository (AWR). AWR captures key information (e.g. wait classes, metrics, OS statistics); the repository is maintained automatically, and HTML reports can be produced from it.


AWR can be used to identify:

  • SQL statements or modules with heavy loads or potential performance issues, and the symptoms of those loads (e.g. logical I/O (buffer gets), physical I/O, contention, waits)
  • SQL statements that may be using sub-optimal execution plans (indicated by e.g. buffer gets and segment statistics), execution counts, and parsing issues
  • General performance issues, e.g. system capacity (I/O, memory, CPU), system/database configuration, and SGA (shared pool/buffer cache) and PGA sizing advice
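As a sketch of how snapshots and reports are typically produced (the package and script path below are the standard ones shipped with the database; the ADVISOR/DBA privileges and Diagnostics Pack licensing apply):

```sql
-- Take a manual AWR snapshot (snapshots are also taken automatically,
-- every 60 minutes by default)
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Generate an AWR report; the script prompts for report type (html/text),
-- the number of days of snapshots to list, and begin/end snapshot IDs
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
```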



The Automatic Database Diagnostic Monitor (hereafter called ADDM) is an integral part of the Oracle RDBMS; it analyzes the gathered performance statistics and advises on changes to resolve the performance issues it identifies. For this, ADDM uses the Automatic Workload Repository (AWR), a repository in the database that stores database-wide usage statistics at fixed intervals (60 minutes by default).


To make use of ADDM, a PL/SQL interface called DBMS_ADVISOR is provided. This interface may be called directly, through the supplied $ORACLE_HOME/rdbms/admin/addmrpt.sql script, or via Oracle Enterprise Manager.
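For example, the supplied script is the quickest way to produce an ADDM report for a snapshot interval (run from SQL*Plus as a user with the ADVISOR privilege; the script prompts for begin/end snapshot IDs and a report name):

```sql
-- Defaults to analyzing the most recent snapshot interval
@$ORACLE_HOME/rdbms/admin/addmrpt.sql
```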




Active Session History (ASH) acquires activity information for active sessions by sampling the database kernel's session state objects. The quantity of information sampled by ASH can be voluminous, so ASH maintains a fixed-size circular buffer, allocated at database start-up, in the System Global Area (SGA).


The ASH data is periodically flushed to disk and stored in the Automatic Workload Repository (AWR), which we covered above. The information can be used for drill-down during problem diagnosis or performance tuning. In addition to ADDM using ASH data to achieve its objectives, ASH contents are also displayed on the Oracle Enterprise Manager (EM) performance screen.
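For a quick drill-down, ASH can also be queried directly. A minimal sketch that summarizes what active sessions were doing over the last 30 minutes (columns are from the standard V$ACTIVE_SESSION_HISTORY view; a NULL event means the session was on CPU):

```sql
-- Top SQL/activity combinations over the last 30 minutes
SELECT sql_id,
       NVL(event, 'ON CPU') AS activity,
       COUNT(*)             AS samples  -- roughly 1 sample per active session per second
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
GROUP  BY sql_id, NVL(event, 'ON CPU')
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;  -- 12c+ syntax; use ROWNUM on earlier releases
```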



Please note that these three tools require the Oracle Diagnostics Pack license. As an alternative, you can use Statspack and SQL Trace/TKPROF.


Further information is available via



Support Tools SQLT and SQLHC


Oracle Support offers several support tools that can be downloaded for free from My Oracle Support. In this update we focus on two of those tools for diagnosing and identifying performance issues: SQLT and SQLHC.


In short, SQLT, also known as SQLTXPLAIN, takes one SQL statement as input and outputs a set of diagnostics files. These files are commonly used to diagnose poorly performing SQL statements. SQLT connects to the database and collects execution plans, Cost-Based Optimizer (CBO) statistics, schema object metadata, performance statistics, configuration parameters, and similar elements that influence the performance of the SQL being analyzed.



SQLT provides pretty much everything that is needed to perform a SQL Tuning analysis and more. SQLT does not gather application data, but it does gather many pieces of metadata which, besides helping in the diagnostics phase of a SQL Tuning issue, may also allow the creation of a Test Case on a separate system for further investigation.



SQLT can be used if you want to perform the SQL Tuning effort yourself, or you may have been asked to use SQLT to provide the outputs to Oracle Support as part of a Service Request. You may even want to use it proactively, since it presents an "Observations" section as part of the main report, which includes a comprehensive set of health-checks in and around the SQL statement being analyzed. This section can be considered a "heads-up": you may want to act on some of the observations and retry the execution of your SQL, especially if they involve CBO statistics.



SQLT has several main execution methods:

  • XTRACT: Finds the SQL in memory and/or AWR for analysis. This is the most commonly used method to start an analysis. It requires the SQL_ID or HASH_VALUE of the SQL statement in question, which can be found in a SQL Trace file, an AWR report, or a Statspack report.
  • XECUTE: Executes the SQL from a text file to produce a set of diagnostics files.
  • XTRXEC: Combines the features of XTRACT and XECUTE, executing both methods serially.
  • XPLAIN: Based on the EXPLAIN PLAN FOR command, executed on SQL from a text file.
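As an illustration, the XTRACT method is typically started from the directory where the SQLT zip was unpacked; the SQL_ID below is a placeholder for your own statement's SQL_ID:

```sql
-- Connect as the application user that runs the statement;
-- the script prompts for the SQLTXPLAIN user's password
START sqlt/run/sqltxtract.sql djkbyr8vkc64p
```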


The generated diagnostics output is a zip file named sqlt_sNNNNN.zip, where NNNNN is a unique five-digit number. First, review the file named sqlt_sNNNNN_main.html. This file, together with a SQL Trace and its TKPROF output, provides a clear picture of the SQL statement being analyzed. For one SQL statement, SQLT provides the following:

  • all known Execution Plans
  • CBO Schema Statistics
  • System Statistics
  • CBO Parameters

and a large list of other pieces of information that may be useful during the SQL Tuning analysis.



The SQL Tuning Health-Check Script (SQLHC) is used to check the environment in which a single SQL statement runs, examining Cost-Based Optimizer (CBO) statistics, schema object metadata, configuration parameters, and other elements that may influence the performance of the one SQL being analyzed. It does this while leaving "no database footprint", ensuring it can be run on all systems.


When executed for one SQL_ID, this script generates an HTML report with the results of a set of health-checks around that SQL statement. You can find the SQL_ID of a statement in an AWR or ASH report, or you can select it from the database using the V$SQL view.
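For instance, a sketch of looking up a SQL_ID via V$SQL (the LIKE pattern is a placeholder for a fragment of your statement's text):

```sql
SELECT sql_id, child_number, executions,
       ROUND(elapsed_time/1e6) AS elapsed_secs,  -- elapsed_time is in microseconds
       sql_text
FROM   v$sql
WHERE  sql_text LIKE '%your_statement_fragment%'
AND    sql_text NOT LIKE '%v$sql%';  -- exclude this lookup query itself
```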


Health-checks are performed over:

  • CBO Statistics for schema objects accessed by the one SQL statement being analyzed
  • CBO Parameters
  • CBO System Statistics
  • CBO Data Dictionary Statistics
  • CBO Fixed-objects Statistics
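SQLHC is typically run from SQL*Plus as a DBA user after downloading the script from My Oracle Support. The first parameter declares which tuning licenses are in effect (T = Tuning and Diagnostics packs, D = Diagnostics pack only, N = none); the SQL_ID below is a placeholder:

```sql
START sqlhc.sql T djkbyr8vkc64p
```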


Have you had a chance to check out the documents below? Get moving and give them a try!




SRDC - Service Request Data Collections


When investigating performance issues, it is important to start with the appropriate set of data. To that end, we have created a series of Oracle Service Request Data Collection (SRDC) articles to collect data for the most common performance issues.


Currently we have articles to collect diagnostic data for:

  • slow database performance
  • hangs
  • slow SQL performance
  • errors (ORA-00600/ORA-07445/ORA-00054/ORA-00060/ORA-12751)
  • locking
  • contention for various wait events
  • SQL Plan Management
  • Database Testing: Capture and Replay

and more ...


SRDCs are particularly beneficial because they collect data targeted to a specific problem area and include all the details in one place. When you encounter an issue, the SRDC tells you which diagnostics are required, so you do not have to wait for confirmation from a support engineer, which in turn saves you time.

Also, since the diagnostic information is standardised, it can be pre-processed, important information identified, and your issue addressed more quickly.


The following article provides the list of performance-related SRDCs:

In addition to diagnostics, each SRDC also provides links to troubleshooting articles tailored to the issue at hand.


SRDCs define what information to collect for various problem types, with the goal that support engineers have all relevant information to commence work on an issue straight away. Some issues only require a generic set of diagnostics, whereas others use specifically targeted SRDCs to collect more detailed information.

In case the issue you are facing is not on the list, use the generic Database Performance Problem SRDC to collect standard performance information.


Make some time and get accredited



We hope you enjoyed the Oracle Database performance blog series. Some key points for Database Performance best practice are:

  • Agree on a baseline - record the officially acceptable level of performance in a service level agreement
  • Find the worst bottleneck - fix the problem - evaluate performance - is the baseline achieved, or do you start again?
  • Evaluate the performance tools from the Oracle Performance Toolkit and decide which ones are best suited for your environment - familiarize yourself with the tools before a performance issue arises
  • Prevention is always better than cure - even for your database environment!
  • Know how to collect the appropriate information when logging an SR

Last but not least