
Database Support Blog


Do you struggle with some of these problems?

  • If you have an outage, the impact on your business can be huge
  • How can you detect risks early to minimize business impact?
  • There are so many different problems, how do you know which ones occur most often and will impact you most?
  • You have a lot of systems and products, how do you manage risks effectively across your enterprise?

 

ORAchk has been designed to address these challenges, with the following capabilities:

  • Automated risk identification and proactive notification before business is impacted
  • Health checks based on the most impactful recurring problems seen across the Oracle customer base
  • Lightweight and non-intrusive health check framework for the Oracle stack
  • Runs in your environment - no need to send anything to Oracle
  • Scheduled email Health Check reports with drill down capability

 

The rest of this article will give you a quick overview of how to use ORAchk to reduce your risk. This was also covered in the My Oracle Support web cast How To Use ORAchk to Reduce Your Risk (Recording | PDF).

 

Topics covered:

  • Products & Environments Covered by ORAchk
  • Where to get ORAchk
  • ORAchk Installation
  • ORAchk Auto Update
  • Sample Problem Scenario
  • Recommended ORAchk Usage Methodology
  • How to Schedule ORAchk
  • Viewing the ORAchk Report
  • How to Exclude Checks
  • Profiles and Adhoc Usage of ORAchk
  • ORAchk Collection Manager
  • ORAchk Community
  • More Information about ORAchk

 

 

Products & Environments Covered by ORAchk

 

ORAchk is supported on the following platforms:

 

  • Linux x86-64 (zLinux, OEL, RedHat and SuSE 9, 10, 11)
  • Oracle Solaris (SPARC and x86-64)
  • IBM AIX
  • HP-UX
  • MS Windows x64 (2008 and 2012, Cygwin Required)

 

ORAchk is written in BASH and requires BASH 3.2 or higher. BASH is not installed by default on AIX or HP-UX, but it can be downloaded free of charge. On Windows, the Cygwin Linux emulator is required.

The aim of ORAchk is to provide one Health Check tool for the entire Oracle stack.

 

As of release 12.1.0.2.4 ORAchk has checks for:

                                                                    

Oracle Database (versions 10gR2, 11gR1, 11gR2 & 12cR1)

  • Standalone Database
  • Upgrade Readiness Validation
  • Grid Infrastructure & RAC
  • Maximum Availability Architecture (MAA) Validation
  • GoldenGate
    

Enterprise Manager Cloud Control (12c only)

  • Repository
  • Agents
  • OMS (version 12.1.0.1 and above on Linux only)
  

Oracle Hardware Systems

  • Oracle Solaris
  • Oracle Systems configuration for Oracle Database, Oracle Middleware & Oracle Applications

E-Business Suite

  • Oracle Payables (R12 only)
  • Oracle Workflow
  • Oracle Purchasing (R12 only)
  • Oracle Order Management (R12 only)
  • Oracle Process Manufacturing (R12 only)
  • Oracle Fixed Assets (R12 only)
  • Oracle Human Resources (R12 only)
  • Oracle Receivables (R12 only)
  

Oracle Siebel

  • Oracle Siebel verification of the database configuration for stability, best practices and performance optimization (Siebel 8.1.1.11 connecting to Oracle Database 11.2.0.4.)

 

If you have an Oracle Engineered System, the ORAchk functionality is also available via the Exachk Script.

          

Where to get ORAchk

 

ORAchk is available from a number of different locations and bundles:

  • Shipped with the Database from versions 11.2.0.4 and 12.1.0.2 (Default location is $ORACLE_HOME/suptools/orachk)                      
  • Shipped with Database PSUs from Jan 2015                      
  • Included within TFA / Support tools bundle                     
  • Main ORAchk Document 1268927.2                 

 

 

ORAchk Installation

 

Installation is simple: all you need to do is unzip the orachk.zip file. It takes less than a minute.

ORAchk evolved from RACcheck and as such is RAC aware. If you are installing on a RAC system, you only need to put it on one node; ORAchk will discover the other nodes and diagnose them remotely.

For standalone database instances, ORAchk needs one install per host; alternatively, for ease of deployment, mount it on a shared file system accessible to each host.
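As a minimal sketch (the install directory is just an example), the whole installation and a first interactive run look like this:

# Unzip into a directory owned by the Oracle software owner (path is illustrative)
unzip orachk.zip -d /opt/orachk
cd /opt/orachk
./orachk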

 

 

ORAchk Auto Update

 

ORAchk provides automatic update options to help you stay up to date with the latest health checks.

 

Option 1: If you have connection from your environment to My Oracle Support

  • When ORAchk is older than 120 days, it will prompt you to let it automatically download a newer version from My Oracle Support
  • This can also be specifically triggered with: ./orachk -download

 

Option 2: If you have no direct connection from your environment to My Oracle Support

  1. Download the latest orachk.zip to a shared network staging location
  2. Set the environment variable RAT_UPGRADE_LOC to point to the staging location
  3. The next time the orachk script is started, it will prompt you to allow it to upgrade itself (see the sketch below)
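A minimal sketch of that offline flow, assuming a staging path of your choosing:

# Point ORAchk at the staging area holding the new orachk.zip (path is illustrative)
export RAT_UPGRADE_LOC=/mnt/shared/orachk_staging
# The next run prompts to upgrade itself before executing the checks
./orachk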

 

Sample Problem Scenario

 

Take an example problem scenario without ORAchk:

prob-example-no-orachk.jpg

  1. The DB_FILE_MULTIBLOCK_READ_COUNT parameter is not set correctly
  2. This leads to the optimizer favoring large I/Os, which has a negative effect on the performance of queries
  3. You see the slow query performance and open a Service Request with Oracle Support
  4. It takes time for you and Oracle Support to work together to troubleshoot, find the cause and resolve the issue
  5. This can all have a negative impact on the rest of your business

 

Now look at the same example problem scenario with ORAchk installed:

prob-example-with-orachk.jpg

 

  1. ORAchk proactively sends you regular reports of issues found, which would include warning you about the DB_FILE_MULTIBLOCK_READ_COUNT parameter
  2. So you read the recommendation in the report and the supporting knowledge documents on My Oracle Support and understand how to resolve the problems
  3. You are able to proactively resolve the root cause of potential performance issues before they have had a negative impact
  4. The business continues as normal with no complaints

 

 

Recommended ORAchk Usage Methodology

 

The recommended ORAchk usage is:

  1. Schedule ORAchk to run in daemon mode on specified interval and email report
  2. Identify actions easily by viewing reports and automated comparison of previous runs
  3. Act on recommendations by following the My Oracle Support links

 

How to Schedule ORAchk

schedule-orachk.jpg

 

This shows a simple example of how to schedule monthly ORAchk reports.

You set the AUTORUN_SCHEDULE parameter, telling the ORAchk daemon when the checks should be run:

  • Hour (from 0 to 23)
  • Day of the month
  • Month of the year
  • Day of the week

Wildcards tell it to use every value, so in this first example we tell ORAchk to run at 3am on the 28th of every month, regardless of the day of the week.

 

You can set a more complex schedule, specifying multiple values for each of the time fields if necessary, for example:
  ./orachk -set "AUTORUN_SCHEDULE=8,20 * * 2,5"

This tells ORAchk to run at hours 8 and 20, every day of the month, every month, on days 2 and 5 of the week - meaning it will run every Tuesday and Friday at 8am and 8pm.

 

Use the NOTIFICATION_EMAIL parameter to tell ORAchk the email address to send the report to; this can be a comma-separated list of addresses if needed.

 

Then we start the daemon with:
  ./orachk -d start
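Putting this section together, a complete scheduling setup might look like the following sketch (the email address is a placeholder):

# Run at 3am on the 28th of every month (the example from the text above)
./orachk -set "AUTORUN_SCHEDULE=3 28 * *"
# Where to send the report; multiple comma-separated addresses are allowed
./orachk -set "NOTIFICATION_EMAIL=dba-team@example.com"
# Start the daemon so the schedule takes effect
./orachk -d start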

 

 

Viewing the ORAchk Report

report-summary.png

When ORAchk runs at the scheduled time it will send you an HTML report of the output. This will contain:

  • A high-level health score based on the number of checks which passed or failed
  • A summary of the run, showing things such as where and when it was run, which version was used, how long it took, which user it ran as, and so on
  • The table of contents, which provides easy access to the findings. The table of contents changes based on the environment, as ORAchk figures out which Oracle products you have installed and only runs checks for those products.
  • The findings themselves, with recommendations for how to resolve any issues found.

 

The findings section of the report shows you a table with a row corresponding to each check:

report-findings.jpg

 

In the above example, the first finding shows a warning based on a SQL check, which tells us we should consider unsetting the database parameter DB_FILE_MULTIBLOCK_READ_COUNT; this was found on all databases tested.
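Acting on that particular recommendation would typically mean removing the explicit setting so the database can derive the value itself. A minimal sketch, to be run as a privileged user only after reviewing the linked documents:

# Remove the explicit parameter setting from the spfile; the database then
# derives the value automatically after the next instance restart
sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';
EOF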

 

We can drill down to view the details, which will show us something similar to this:

report-recommendation.png

This gives:

  • A more detailed explanation of the problem
  • A recommendation for what you should do next
  • Links to supporting documents, such as those in the My Oracle Support knowledge base
  • Where the check failed and where it passed
  • Why the check failed

 

ORAchk will only run checks relevant to what you have installed on the machine.

 

How to Exclude Checks

You should aim to address all issues reported by ORAchk. Problems should stand out very clearly in the report; you should not have a situation where your team says "yes, we know about that, we ignore that part of the report", as that makes it much easier to miss important problems.

The checks in ORAchk are based on the most impactful problems seen by Oracle Support, and the recommendations reflect what is generally the best setting. It may be that in your particular situation you know the issue described is not a problem, in which case it is best practice to exclude those checks from being run. See the FAQ "How to exclude certain checks from ORAchk" in the ORAchk Document 1268927.2.

 

 

Profiles and Adhoc Usage of ORAchk

 

If you want to run just a subset of the checks - for example, if you have an environment with both the Database and EM on the same host but you only want to run the EM checks - you can use ORAchk profiles.

Profiles are logical groupings of checks covering similar problem areas, such as em, dba, sysadmin or clusterware.

You can run orachk with only checks in a particular profile by using:
  ./orachk -profile <profile_name>

 

New profiles are added as new groups of checks are added, so the full list may change with each release. You can see the full list of available profiles by simply using the -h (help) option:
  ./orachk -h
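For the Database-plus-EM example above, a run limited to the Enterprise Manager checks might look like this (assuming the profile is named em, as in the list of profile names mentioned earlier):

# Show the profiles available in this ORAchk version
./orachk -h
# Run only the Enterprise Manager related checks
./orachk -profile em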

 

If you’re planning to upgrade the database you can run
  ./orachk -u -o pre

 

ORAchk will prompt you for which version of the database you're going to upgrade to and will then run checks specific to problems you might encounter doing that upgrade.

 

Once you’ve done the database upgrade you can again use ORAchk to verify everything, by doing:
  ./orachk -u -o post

 

 

ORAchk Collection Manager

 

ORAchk Collection Manager is a companion Application Express (APEX) application. When you have many systems, it can be difficult to manage them on a system-by-system basis.

The Collection Manager provides a dashboard in which you can track your ORAchk collection data in one easy to use interface.

 

You can use the Collection Manager to:

  • Monitor by Business Unit
  • Monitor systems within Business Units, by DBA Manager and DBA
  • Spot trends by finding the most frequent failures or warnings
  • View results with automatic comparison between the most recent and previous collection for each system
  • See incidents created for tracking correction of issues
  • Browse the collections by various filter criteria

 

The ORAchk Collection Manager is installed on a single system but serves as an enterprise-wide repository of all ORAchk collections. The ORAchk collections can be configured to upload to the repository automatically.

ORAchkCM.png

 

 

ORAchk Community

 

ORAchk typically has a quarterly release cycle, which allows us to quickly get out new checks as top problems are found by Oracle Support. It also allows us to quickly fix any bugs found in ORAchk and build in new features. The ORAchk development team have a particular focus on trying to accommodate features requested by customers.

So any time you have feedback on the content or features of ORAchk, we'd love to hear from you with any problems or suggestions you have, via the ORAchk community.

 

 

More Information about ORAchk

 

For further information about ORAchk, see the main ORAchk Document 1268927.2 on My Oracle Support.

 

 

Gareth-Chapman-Oracle

The new ORAchk release 12.1.0.2.4 is now available to download.


New Features in ORAchk 12.1.0.2.4

 

ORAchk-12_1_0_2_4.jpg   

Auto update ORAchk when newer version is available

New in this release: if ORAchk is older than 120 days and a newer version is not available locally, it will check whether a newer version is available on My Oracle Support and automatically download and upgrade.

Download of the latest version directly from My Oracle Support can also be specifically triggered with "./orachk -download".

If ORAchk is running in automated mode, the daemon will automatically upgrade from the local location defined by RAT_UPGRADE_LOC just before the next scheduled run. An email notification will be sent about the upgrade, then ORAchk will continue with the scheduled run using the upgraded version, all without requiring you to restart the ORAchk daemon.


Expanded Oracle Product Support

ORAchk 12.1.0.2.4 now brings wider and deeper support throughout the Oracle product stack, with newly added support for the following product areas:

  • Enterprise Manager OMS
  • E-Business Suite Oracle Fixed Assets
  • E-Business Suite Oracle Human Resources
  • E-Business Suite Oracle Receivables
  • Siebel CRM Application

See Document 1268927.2 for further details of the new product support.

 

Over 60 New Health Checks

This release of ORAchk adds new checks for some of the most impactful problems seen by Oracle Support, specifically in the areas of:

  • Systems hardware: settings to optimize encryption performance for the Database and E-Business Suite.
  • Solaris & Siebel CRM Object Manager: ensuring page sizes are set appropriately for Siebel CRM to handle large numbers of users.
  • Database: optimization of memory and resource related configurations, and Application Continuity checks.
  • Enterprise Manager OMS: high-impact problems that cause functional failure or difficulty with patching or upgrade.
  • E-Business Suite Receivables: detection of non-validated Receivables Accounting Definitions, which might prevent the Create Accounting process from functioning.
  • E-Business Suite Fixed Assets: checks for any books with an errored or incomplete depreciation run, to allow for resolution prior to month end close.
  • E-Business Suite Human Resources: verification of Setup Business Group configuration.
  • Siebel Applications: verification of the database configuration for stability, best practices and performance optimization.

For more details and to download the latest release of ORAchk see Document 1268927.2

The new ORAchk release 12.1.0.2.5 is now available to download.

 

New Features in ORAchk 12.1.0.2.5

 

Create your own checks

Would you find it useful to easily run your own environment or business specific checks?

Now you can use ORAchk to write, execute and report on your own custom, user-defined checks.

  1. Use the Collection Manager interface to write your checks
  2. Take advantage of ORAchk's built-in email notification and diff reporting
  3. View the results of your checks in the new "User Defined Checks" section of the ORAchk HTML report
  4. Collect and easily view all results enterprise-wide using the ORAchk Collection Manager

 

Expanded Oracle Product Support

ORAchk 12.1.0.2.5 now brings wider and deeper support throughout the Oracle product stack, with newly added support for the following product areas:

  • Oracle Identity Manager
  • Oracle E-Business Suite Customer Relationship Management
  • Oracle E-Business Suite Project Billing
  • Oracle ZFS Storage Appliance
  • Oracle Virtual Networking
  • Oracle PeopleSoft  Applications
  • Application Continuity

 

Easily Run or Exclude Specific Individual Checks

New command line switches have been added so you can easily exclude or run only specific checks.

 

Shorter ORAchk Execution Time

ORAchk now has the ability to cache discovered databases, meaning database rediscovery is skipped and execution time is faster. If databases change, you can simply issue one command to have ORAchk refresh the cache.

 

Support for Custom EBS APPS User

ORAchk will now dynamically determine the APPS user, enabling all EBS checks to be used even if you have changed the name of the APPS user.

See Document 1268927.2 for further details of the new product support.

 

Over 100 New Health Checks

This release of ORAchk adds more than 100 new checks for some of the most impactful problems seen by Oracle Support, specifically in the areas of:

  • Database best practices for patching, performance, scalability and high availability.
  • Oracle Identity Management pre-install settings, post-install configuration and runtime environment.
  • E-Business Suite best practice configuration settings for Human Resources and CRM (forecast, payment worksheets and service contracts).
  • E-Business Suite early detection of incompatible configurations in Project Billing.
  • Oracle ZFS Storage Appliance best practice configuration for performance and resilience.
  • Oracle Virtual Networking best practice configuration settings.
  • PeopleSoft Applications database best practices.

For more details and to download the latest release of ORAchk see Document 1268927.2

 

Imagine you encounter an issue with your database and need to collect the relevant trace and diagnostic files, maybe even across several nodes. Sound familiar? If so, you probably also know how time-consuming and complex this task can be.

 

Well, did you know that there is a tool that can do this task for you? It is called Trace File Analyzer Collector, or TFA for short, and it is applicable to both RAC and non-RAC databases.

 

As the systems supporting today's complex IT environments become larger, more complex and more numerous, it can be difficult and time-consuming for those managing them to know which diagnostics to collect. In addition, systems might be distributed among different data centers. Hence, when an issue occurs, it may take several attempts and iterations before all the relevant diagnostic files are uploaded to Oracle Support for troubleshooting.

 

Before we go into the details of TFA, let's have a look at a traditional diagnostic lifecycle:

Traditional_diagnostic_lifecycle.png

 

Trace File Analyzer Collector (TFA) addresses exactly these issues by providing a simple, efficient and thorough mechanism for collecting ALL relevant first-failure diagnostic data in one pass, as well as useful interactive diagnostic analytics. TFA is started once and collects data from all nodes in the TFA configuration. As long as the approximate time that a problem occurred is known, one simple command can be run that collects all the relevant files from the entire configuration based on that time. The collection includes Oracle Clusterware, Oracle Automatic Storage Management and Oracle Database diagnostic data as well as patch inventory listings and operating system metrics.

TFA_collector_process.png

Business Drivers for Implementing TFA

 

Four key business drivers should be considered in a decision to implement Trace File Analyzer Collector.

 

  • Reduced Costs
    Any problem that leads or may lead to service interruption costs money and ties up resources. The faster a failed service can be restored, the sooner normal operation can be resumed and IT staff can get back to more productive activities. When time is spent collecting and uploading diagnostic data over and over again, whether due to a lack of knowledge about what data is needed or because of time pressure, the resolution cycle is lengthened. TFA can help compress this aspect of the cycle.
  • Reduced Complexity
    Collecting the right data is crucial to efficient analysis of diagnostic data. In moments of distress, collecting the relevant data is often replaced by collecting everything, which also includes unnecessary diagnostic data. Analyzing unnecessary data means extra cycles to determine what was provided. TFA helps eliminate those extra cycles by providing a standardized way of collecting the relevant diagnostic data across all Oracle environments, supporting nearly all configurations. TFA also comes bundled with the Support Tools Bundle, which consists of tools often requested by Oracle Support for collecting additional diagnostics. A convenient command line interface is provided for invoking those tools. This approach eliminates the need for users to download and install them separately.
  • Increased Quality of Service
    TFA's pruning algorithms, applied at collection and upload time, allow Oracle Support to provide much quicker responses and more valuable support. TFA's one-file-per-host transfer accelerates root cause analysis by avoiding multiple communications between Oracle Support and the customer.
  • Improved Agility
    Using Trace File Analyzer Collector means not having to worry about what data to collect. Once set up and configured, it can be run either on an automated, event-driven basis or on demand, leaving IT personnel more time to focus on productive tasks. TFA enables first-level production support personnel to upload diagnostic data just as efficiently and precisely as senior support staff.

 

 

 

TFA Architecture and Configuration Basics

 

TFA is implemented by way of a Java Virtual Machine (JVM) that is installed on, and runs as a daemon on, each host in the TFA configuration. Each TFA JVM communicates with peer TFA JVMs in the configuration through secure sockets. A Berkeley Database (BDB) on each host is used to store TFA metadata about the configuration, directories and files that TFA monitors on each host.

 

Supported versions (as of Sept. 2015) of Oracle Clusterware, ASM and Database are 10.2, 11.1, 11.2 and 12.1. TFA is shipped and installed with any 11.2.0.4 and 12.1.0.2 Grid Infrastructure home. If you are using any other version, TFA must be downloaded and installed separately, following the instructions in My Oracle Support Note 1513912.1.

 

Supported platforms (as of Sept. 2015) are Linux, Solaris, AIX and HP-UX.

 

Supported engineered systems (as of Sept. 2015) are Exadata, the Zero Data Loss Recovery Appliance (ZDLRA) and the Oracle Database Appliance (ODA). Support on engineered systems includes database and storage servers.

 

Use My Oracle Support Document 1513912.1 to download TFA and to monitor for updates on version and platform support.

 

For supported versions other than 11.2.0.4 and 12.1.0.2, Oracle recommends that TFA be downloaded from the My Oracle Support document and installed as part of a standard build.  Oracle also recommends that the version distributed with 11.2.0.4 and 12.1.0.2 be upgraded to the latest version downloadable from My Oracle Support.

 

Please note that TFA is also useful for non-RAC, single-instance database servers; it can be downloaded from the My Oracle Support document mentioned above and installed manually.

 

A simple command line interface (CLI) called tfactl is used to execute the commands supported by TFA. The JVMs only accept commands from tfactl. tfactl is used for monitoring the status of the TFA configuration, making changes to the configuration and, ultimately, taking collections.
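As a quick sketch of day-to-day use (exact output and options can vary between TFA versions):

# Check whether the TFA daemon is running on every host in the configuration
tfactl print status
# Review the current configuration, including the repository location and limits
tfactl print config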

 

TFA is a multi-threaded application and its resource footprint is very small.  Usually the user would not even be aware that it is running.  The only time TFA will consume any noticeable CPU is when doing an inventory or a collection and then only for brief periods.

 

Collections are initiated from any host in the configuration. Here’s an example which would perform a cluster-wide collection for the last 4 hours for all supported components:

$ tfactl diagcollect

 

The initiating host’s JVM communicates the collection requirement to peer JVMs on other hosts if files are needed from other hosts in the configuration. All collections across the configuration are run concurrently and copied back to the TFA repository on the initiating node.  When all collections are complete, the user can upload them from a single host to an Oracle Service Request.

 

The TFA repository can be configured on a shared filesystem or locally. In case a shared file system is used, the collection result of each host’s collection is stored in specific subdirectories as soon as the collection is complete.  If the TFA repository is configured locally instead (on each host) each collection result is copied to the initiating host’s repository before being deleted from each remote host.  In either case it is convenient for the user to obtain the files needed for a Service Request from a single location that is listed at the end of the output for each collection.

 

Managing TFA is simple. For example, adding nodes to or removing nodes from the TFA configuration after the initial install is straightforward, and is only required when the configuration, such as the cluster topology, changes. New databases added to a configuration, on the other hand, are discovered automatically.

 

Under normal operating conditions TFA spawns a thread to monitor the end of each alert log in the configuration – Database, ASM, and Clusterware.  TFA monitors for certain events such as node or instance evictions, and certain errors such as ORA-00600, ORA-07445, etc.  TFA stores metadata about these events in the BDB for future reference.

 

TFA runs an auto-discovery and a file inventory periodically to keep up to date on databases and diagnostic files.  A file inventory is kept for every directory registered in the TFA configuration and metadata about those files is maintained in the BDB.  Examples of the metadata kept are file name, first timestamp, last timestamp, file type, etc.  Differing timestamp formats are also “normalized” as the format of timestamps can vary from one file type to the next.

 

 

Functionality Matrix

 

Version | System Type | Comments
10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, 11.2.0.3, 12.1.0.1 | RAC | Download latest version from Document 1513912.1
11.2.0.4 | RAC | Installed by default in $GI_HOME/tfa, $GI_HOME/tfa/bin/tfactl (tfactl -h for options)
12.1.0.2 | RAC | Installed by default in $GI_HOME/tfa, $GI_HOME/bin/tfactl (tfactl -h for options)
11.2.0.4 JUL2014 PSU upwards | RAC | Installed by default in $GI_HOME/tfa, $GI_HOME/bin/tfactl (tfactl -h for options); analytic functionality added (tfactl -analyze -h for options)
11.2.0.1, 11.2.0.2, 11.2.0.3, 12.1.0.1 | Exadata | Download latest version from Document 1513912.1
11.2.0.4 | Exadata | Database Server collection support only
11.2.0.4.6 Bundle Patch (APR2014 PSU) | Exadata | Storage Server collection support added
11.2.0.4.9 Bundle Patch (JUL2014 PSU) | Exadata | Analytic functionality added (tfactl -analyze -h for options)
12.1.0.2 | Exadata | Installed by default in $GI_HOME/tfa, $GI_HOME/bin/tfactl (tfactl -h for options); Database and Storage Server collections and analytic functionality

 

 

Support Tools Bundle

 

As of version 12.1.2.3.0, TFA includes tools commonly requested or recommended by Oracle Support. The tfactl command interface is used to manage the tools. Users can start, stop, check the status of, and deploy the tools in a standard way using tfactl commands, without needing to download and install them all separately as in the past. Using the bundled tools from within TFA allows for a standard command interface, standard deployment locations and collection of tool output by TFA.

 

NOTE:  Oracle recommends that TFA Collector be kept at the latest version using downloads from My Oracle Support Document 1513912.1 which is updated frequently in order to make new features and other improvements available.

 

 

 

TFA Collections

 

TFA only collects files that are relevant to the time of the problem. All the user has to know is the approximate time of the problem, which can then be provided using a time modifier in the CLI. TFA will then gather all the first-failure diagnostic files Oracle Support needs in order to triage the problem. The simplest and most thorough form of collection is for the default time period, which is the last 4 hours.

Example #1 :  $ tfactl diagcollect

 

Alternatively, a more precise time can be given if one is known, for example a problem which occurred within the last hour:

Example #2: $ tfactl diagcollect -since 1h


The command shown in Example #1 above initiates a TFA run collecting all the relevant diagnostic data for an issue that occurred within the last four hours. This command performs a configuration-wide collection of all OS, Clusterware, ASM, Database, Cluster Health Monitor and OS Watcher files that are relevant to that time, because no further restrictions, such as a specific node or component, are specified. In addition, TFA collects other relevant files and configuration data such as patch inventories. Similarly, the command shown in Example #2 collects all the files modified within the last hour, most likely resulting in a smaller and more targeted collection.

 

tfactl provides a series of command line options to limit the collection to a subset of hosts or components. However, if the root cause is completely unknown, it might be best to simply collect as much data for the relevant time as possible so that nothing is missed. Reducing the set of data to collect is not as effective as being precise about the time of the incident: the more precisely the time of the problem can be specified, the smaller the collection will be. Likewise, the more specifically a problem can be narrowed down, the more targeted a collection can be in terms of nodes and components.
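As a hedged illustration of that kind of narrowing (option names can differ slightly between TFA releases; the node names are placeholders):

# Limit the collection to an explicit time window rather than a relative period
tfactl diagcollect -from "Sep/01/2015 09:00:00" -to "Sep/01/2015 11:00:00"
# Limit the collection to particular components and nodes
tfactl diagcollect -crs -os -node node1,node2 -since 6h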

 

TFA's inherent pruning of larger files, skipping data from traces and logs that falls outside the time specified, is another way in which data collection and analysis are made more efficient. Again, the more precisely the time can be specified, the more the files can be pruned. The design goal of TFA is to make collections as complete, as relevant and as small as possible, to save time in collecting, copying, uploading and transferring files for both the customer and Oracle Support, following a simple yet very efficient principle: a file that was not modified around the time of the incident is unlikely to be relevant.

 

When a diagnostic collection is taken, the first step, performed automatically, is a "pre-collection" inventory, in case any files have been modified or created since the last periodic inventory. In a pre-collection inventory, only files for the specified databases and/or components are inventoried.

TFA can be configured to take collections automatically when registered events occur and store them in the repository. Examples of registered events are node evictions and ORA-00600 errors, among others documented in the TFA User Guide. There is a built-in flood control mechanism for the auto-collection feature to prevent a rash of duplicate errors or events from triggering multiple duplicate collections. Should the repository reach its configurable maximum size, TFA will suspend taking collections until space is cleared in the repository using the purge command.

Example #3: # tfactl purge -older 30d

In Example #3, any collections older than 30 days will be deleted, thereby freeing space for new collections. Note that the purge command must be run as root or under sudo control.

 

 

 

TFA Analytics

 

TFA has built-in analytics enabling the user to summarize the results from Database, ASM and Clusterware alert logs, system message files and OS Watcher performance metrics across the configuration. Errors, warnings and search patterns can be summarized for any given time period for the entire configuration, without needing to access the files on each node. In a similar way, OS Watcher data can be summarized for the entire configuration for any given time period.
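A hedged sketch of how these analytics are typically invoked through tfactl (flags vary by version; see the analyze help option noted in the functionality matrix above):

# Summarize errors and warnings across the configuration for the last day
tfactl analyze -since 1d
# Search all monitored logs for a specific pattern over the last 4 hours
tfactl analyze -search "ORA-00600" -since 4h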

 

 

Further Information

 

For further information, please refer to My Oracle Support Document 1513912.1.

 

For any help and support with TFA, please come and chat to us via the RAC Community.





In this blog:

  • September is Oracle Database Performance month
  • Database Performance Tools ADDM, AWR and ASH
  • Support Tools SQLT and SQLHC
  • SRDC - Service Request Data Collections
  • Summary

 

 


September is Oracle Database Performance month


September is Oracle Database performance month for our blog series. Throughout the month, we will highlight some of the great performance tools we recommend for your database toolkit.

 

Why should you read this blog and revisit it during September? Well, there are lots of reasons:

  • Take the lead in your team to quickly and easily identify heavy loads or performance issues on your system and put action plans in place to improve the performance
  • Create your own performance toolkit
  • Recognise potential performance issues before they arise
  • Deal with performance issues confidently
  • Ensure all diagnostic data is collected for the relevant time frame when engaging Oracle Support

 

Go ahead, read on and find out for yourself!

 

You might also consider completing the Oracle Support Database Accreditation to increase your expertise with tools and best practices to support your Oracle Database. The accreditation also includes a focus on the performance tools we will highlight throughout September. What do the coolest database professionals have in their tool kits? Let’s find out.

 

For further details about Oracle Support Database Accreditation and to learn more about Performance Tuning and Diagnostics

 

Performance Toolkit.png


 

Database Performance Tools ADDM, AWR and ASH

 

Performance tuning and problem diagnosis are the two most challenging and important management tasks that any database administrator performs. Hence, let’s start our performance series with some of the tools that are delivered with the database:

 

AWR


The Oracle database automatically collects and stores workload information in the Automatic Workload Repository (AWR). AWR offers key information (e.g. wait classes, metrics, OS statistics), the repository is maintained automatically, and HTML reports can be produced (a sketch of generating one follows below).

 

AWR can be used to identify:

  • SQLs or Modules with heavy loads or potential performance issues, symptoms of those heavy loads (e.g. logical I/O (buffer gets), Physical I/O, contention, waits)
  • SQLs that could be using sub-optimal execution plans (e.g. buffer gets, segment statistics), numbers of executions, parsing issues.
  • General performance issues, e.g. system capacity (I/O, memory, CPU), system/DB configuration, and SGA (shared pool/buffer cache) and PGA sizing advice.
AWR.png
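A minimal sketch of producing one of these HTML reports on the database server (the script prompts for the report type, snapshot range and output file name):

# Generate an AWR report interactively as a DBA user
sqlplus / as sysdba @${ORACLE_HOME}/rdbms/admin/awrrpt.sql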

 


ADDM


The Automatic Database Diagnostic Monitor (ADDM) is an integral part of the Oracle RDBMS, capable of gathering performance statistics and advising on changes to solve any performance issues it measures. For this, ADDM uses the Automatic Workload Repository (AWR), a repository defined in the database that stores database-wide usage statistics at fixed intervals (60 minutes by default).

 

To make use of ADDM, a PL/SQL interface called DBMS_ADVISOR has been implemented. This PL/SQL interface may be called directly through the supplied $ORACLE_HOME/rdbms/admin/addmrpt.sql script or used in combination with Oracle Enterprise Manager.

ADDM.png
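A sketch of the script-based route mentioned above (the script lists the available AWR snapshots and prompts for the begin and end snapshot IDs and an output file name):

# Generate an ADDM report for a chosen snapshot range
sqlplus / as sysdba @${ORACLE_HOME}/rdbms/admin/addmrpt.sql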

 


ASH


Active Session History (ASH) acquires active session activity information by sampling it from the database kernel's session state objects. The quantity of information sampled by ASH can be quite voluminous, so ASH maintains a fixed-size circular buffer, allocated at database start-up time in the System Global Area (SGA).

 

The ASH data is periodically flushed to disk and stored in the Automatic Workload Repository (AWR), which we covered above. The information can be used for drill-down purposes during problem diagnosis or performance tuning. In addition to ADDM using ASH to achieve its objectives, the ASH contents are also displayed in the Oracle Enterprise Manager (EM) performance screens.

ASH.png
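Outside of Enterprise Manager, an ASH report can be produced with the companion script in the same directory as the AWR and ADDM scripts (a minimal sketch; the script prompts for the time range):

# Generate an ASH report for a chosen time window
sqlplus / as sysdba @${ORACLE_HOME}/rdbms/admin/ashrpt.sql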

 

Please note that these three tools require additional licensing. As an alternative, you can use Statspack and SQL Trace/TKPROF.

 

Further information is available via

 

 

Support Tools SQLT and SQLHC

 

Oracle Support offers several support tools that can be downloaded for free from My Oracle Support. We want to focus in this update on two of those tools related to diagnosing and identifying performance issues: SQLT and SQLHC.

 

In short, SQLT, also known as SQLTXPLAIN, takes one SQL statement as input and outputs a set of diagnostics files. These files are commonly used to diagnose SQL statements performing poorly. SQLT connects to the database and collects execution plans, Cost-Based Optimizer (CBO) statistics, schema object metadata, performance statistics, configuration parameters, and similar elements that influence the performance of the SQL being analyzed.

 

 

SQLT provides pretty much everything that is needed to perform a SQL Tuning analysis and more. SQLT does not gather application data, but it does gather many pieces of metadata which, besides helping in the diagnostics phase of a SQL Tuning issue, may also allow the creation of a Test Case on a separate system for further investigation.

 

 

SQLT can be used if you want to perform the SQL Tuning effort yourself, or you may have been asked to use SQLT to provide the outputs to Oracle Support as part of a Service Request. You may even want to use it pro-actively since it presents an "Observations" section as part of the main report, which includes a comprehensive set of health-checks in and around the SQL statement that is being analyzed. This section can be considered as a "heads-up" and you may want to act on some of the observations and retry the execution of your SQL, especially if they involve CBO Statistics.

Sqlxplain.png

 

SQLT has several main execution methods:

  • XTRACT: Finds the SQL in memory and/or AWR for analysis. This is the most commonly used method to start an analysis. It requires the SQL_ID or the HASH_VALUE of the SQL statement in question, which can be found in a SQL Trace file, an AWR report or a StatsPack report (a sketch of this method follows after this list).
  • XECUTE: Executes SQL from a text file to produce a set of diagnostics files
  • XTRXEC: Combines the features of XTRACT and XECUTE executing both methods serially
  • XPLAIN: based on the EXPLAIN PLAN FOR command and executed on SQL from a text file
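As a hedged sketch of the XTRACT method (the SQL_ID, connecting user and password are placeholders; exact script locations depend on how SQLT was installed in your environment):

# From the sqlt/run directory, connect as the application user that owns the
# objects referenced by the statement, passing the SQL_ID and the SQLTXPLAIN password
cd sqlt/run
sqlplus app_user @sqltxtract.sql 0w2qpvnr12345 sqltxplain_password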

 

The generated diagnostics output zip file is named sqlt_sNNNNN.zip, where NNNNN is a unique five-digit number. First, review the file named sqlt_sNNNNN_main.html. This file, together with a SQL Trace and its TKPROF, provides a clear picture of the SQL statement being analyzed. SQLT provides the following information for one SQL statement:

  • All known Execution Plans
  • CBO Schema Statistics
  • System Statistics
  • CBO Parameters

and a large list of other pieces of information that may be useful during the SQL Tuning analysis.

 

 

The SQL Tuning Health-Check Script (SQLHC) is used to check the environment in which a single SQL statement runs, by checking Cost-Based Optimizer (CBO) statistics, schema object metadata, configuration parameters and other elements that may influence the performance of the one SQL being analyzed. It does this while leaving no database footprint, ensuring it can be run on all systems.

 

When executed for one SQL_ID, this script generates an HTML report with the results of a set of health-checks around the one SQL statement provided. You can find the SQL_ID of a statement from an AWR or ASH report or you can select it from the database using the V$SQL view.
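A hedged sketch of running SQLHC for one statement (the SQL_ID and search text are placeholders; the first argument is your licensed pack level, T for Tuning, D for Diagnostics or N for none):

# Optionally look up the SQL_ID of a recently executed statement
echo "SELECT sql_id, sql_text FROM v\$sql WHERE sql_text LIKE '%your text here%';" | sqlplus -S / as sysdba

# Run the health check report for that SQL_ID
sqlplus / as sysdba @sqlhc.sql T 0w2qpvnr12345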

 

Health-checks are performed over:

  • CBO Statistics for schema objects accessed by the one SQL statement being analyzed
  • CBO Parameters
  • CBO System Statistics
  • CBO Data Dictionary Statistics
  • CBO Fixed-objects Statistics

 

Have you had a chance to check out the documents referenced above? Get moving and get accredited.

 

 

 

SRDC - Service Request Data Collections

 

When investigating performance issues, it is important to start with the appropriate set of data. To that end, we have created a series of Oracle Service Request Data Collection (SRDC) articles to collect data for the most common performance issues.

 

Currently we have articles to collect diagnostics data for

  • slow database performance
  • hangs
  • slow SQL performance
  • errors (ORA-00600/ORA-07445/ORA-00054/ORA-00060/ORA-12751)
  • locking
  • contention for various wait events
  • SQL Plan Management
  • Database Testing: Capture and Replay

and more ...

SRDC_Performance_small.png

SRDCs are particularly beneficial because they collect data targeted to a specific problem area and include all the details in one place. When you encounter an issue, the SRDC tells you which diagnostics are required, so you do not have to wait for confirmation from a support engineer, which in turn saves you time.

Also, since the diagnostic information is standardised, it can be pre-processed so that important information is identified, allowing your issue to be addressed more quickly.

 

The following article provides the list of performance related SRDCs:

In addition to diagnostics, each SRDC also provides links to troubleshooting articles tailored to the issue at hand.

 

SRDCs are used to define what information to collect for various different problem types with the goal that support engineers have all relevant information to commence work on an issue straight away. Some issues only require a generic set of diagnostics whereas others use specifically targeted SRDCs to collect more detailed information.

If the issue you are facing is not on the list, use the generic Database Performance Problem SRDC to collect standard performance information.

 

Make some time and get accredited

 


Summary


We hope you enjoyed the Oracle Database performance blog series. Some of the key points for database performance best practice are:

  • Agree on a baseline - record an officially acceptable level of performance in the service level agreement
  • Find the worst bottleneck - find and fix the problem - evaluate performance - start again, or is the baseline achieved?
  • Evaluate the performance tools from the Oracle Performance Toolkit and decide which ones are best suited for your environment - familiarize yourself with the tools before a performance issue arises
  • Prevention is always better than cure - even for your database environment!
  • Know how to collect the appropriate information when logging an SR
Performance_cycle.png

Last but not least

Updated: 15th December 2015

 

 

 

 

There have been some interesting announcements, updates and presentations during this year's OpenWorld 2015 in San Francisco. Check out the highlights and references outlined below related to Oracle Database.

 

Access to the starting point is via the Oracle OpenWorld portal.



Keynotes
Keynotes are available as full-length replays and 2-3 minute overview highlights from Oracle OpenWorld 2015: On Demand Videos

  • Integrated Cloud: Application and Platform Services - Larry Ellison
  • Vision 2025: Digital Transformation in the Cloud - Mark Hurd
  • Design Tech High School & Gender Diversity - Safra Catz
  • Oracle Software Innovations - Thomas Kurian
  • The Secure Cloud - Larry Ellison

 

Focus On Documents

Focus On Documents provide a roadmap to sessions categorised by products and product area. Further details are available in the

 

Content Catalog

The Content Catalog offers search capabilities by keywords for sessions, speakers or demos or by selecting predefined filters on the left side. Many of the presentations are now available to download from the Session Catalog site.

How to Download:

  • From the Oracle OpenWorld homepage, navigate to the Session Catalog site.
  • Filter or search to find your session of choice.
  • Once your search result appears, click the plus sign to expand info about the session. If the speaker has uploaded the presentation, you will see a link to download a PDF or PowerPoint presentation.


Support Sessions

       

  • Deploying a Private Cloud with the Oracle Cloud Platform: Customer Case Study (CON1980) - Join this session to learn how a large Dutch government organization implemented a platform-as-a-service (PaaS) solution on Oracle engineered systems including multiple Oracle Exadata, Oracle Exalogic, Oracle Database Appliance, and Oracle’s ZFS systems. In parallel, this organization reduced IT resource requirements by relying on Oracle for the operational management of the installed infrastructure and Oracle software (Oracle Database 12c and Oracle’s middleware solutions).
  • Deploying a Private Cloud with Oracle Engineered Systems: Lessons from the Trenches (CON2656) - The private cloud is becoming the preferred deployment model for mission-critical applications. This session provides business and technical insight into some of the pragmatic considerations, key operational issues, integration strategies, and deployment options to make the transition to cloud successful. Hear how enterprises can couple adoption of the cloud with DevOps to double up on the benefits. Also included are customer case studies and lessons learned from large deployments.
  • Best Practices: SQL Tuning Made Easier with SQLTXPLAIN (CON8662) - SQL tuning can be a daunting task. Too many things affect the cost-based optimizer (CBO) when you’re deciding on an execution plan. CBO statistics, parameters, bind variables, peeked values, histograms, and more are common contributors. The list of areas to analyze keeps growing. Oracle has been using SQLTXPLAIN (SQLT) as part of a systematic way to collect the information pertinent to a poorly performing SQL statement and its environment. With a consistent view of this environment, an expert on SQL tuning can perform more diligently, focusing on the analysis and less on the information gathering. This tool can also be used by experienced DBAs to make their life easier when it comes to SQL tuning. Learn more in this session.
  • Best Practices for Maintaining Your Oracle Real Application Clusters (CON8663) - You chose Oracle Real Application Clusters (Oracle RAC) to help your organization deliver superior business results. Now learn how to further enhance the availability, scalability, and performance of Oracle RAC by staying on top of the latest success factors and best practices developed by Oracle RAC experts. In this session, Oracle experts discuss proven best practices to help you work more efficiently, upgrade more easily, and avoid unforeseen incidents. Topics include how to keep Oracle RAC in check and simplify diagnostic collection.
  • Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support (CON8664) - You've heard about Oracle Database 12c and its new capabilities. Now join this session to hear from Oracle experts about all the great tools and resources Oracle offers to help you upgrade to Oracle Database 12c efficiently and effectively. Session presenters from Oracle Support bring years of database experience and share recent lessons learned from Oracle Database 12c upgrades at companies of all sizes worldwide. You are sure to leave with valuable information that will help you plan and execute your upgrade. And most of the tools and resources discussed are available to current customers at no additional cost through their standard product support coverage.
  • Performance Analysis Using Automatic Workload Repository and Active Session History (CON8667) - The Automatic Workload Repository and Active Session History features in Oracle Enterprise Manager 12c are two powerful tools for performance analysis that can quickly identify and confirm performance problems with Oracle Database. Join us for this session to hear about real case studies from Oracle experts. Learn how these tools, along with the DB Time model, can help you avoid wasting time guessing the causes of your database performance issues. If you are not familiar with Automatic Workload Repository and Active Session History features, this is a don’t-miss session.
  • Best Practices for Supporting and Maintaining Oracle Exadata (CON8670) - Join this session to discover best practices for maintaining and supporting Oracle Exadata. Experts from Oracle Support are joined by customer representatives from Intel, Sherwin-Williams, and Travelers to share successes and provide actionable recommendations to maximize system availability and drive operating efficiencies. Attendees hear how to take full advantage of Oracle Platinum Services, a special entitlement for qualifying engineered systems.
  • Best Practices for Maintaining and Supporting Oracle Enterprise Manager (CON8671) - In this session, learn about best practices, tips, and tools for maintaining and getting the most out of Oracle Enterprise Manager. Experts from Oracle Support offer knowledge gained from working with Oracle customers worldwide. They look at patching, upgrades, issue resolution, and more. Specific topics covered include Oracle Enterprise Manager metrics and health checks, remote diagnostics, communities, and how to receive priority service request handling.
  • Maximize Your Investment in the Oracle Cloud Platform (CON8716) - You chose the Oracle Cloud Platform to help your organization deliver superior business results. Now learn how to take full advantage of your Oracle Cloud service with all the great tools and resources you’re entitled to with your subscription. In this session, Oracle experts provide proven best practices to help you realize more value faster from your Oracle Cloud service. New users and cloud experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.
  • Hybrid Cloud Best Practices: Migrating to Public Cloud Platform as a Service (CON9495) - Explore the various scenarios in which you can leverage the public cloud to enhance business operations including hybrid development and testing, integration between platform as a service (PaaS) and on-premises applications, and integration between PaaS and software as a service (SaaS). This session discusses migration strategies from an on-premises database and Oracle WebLogic-based applications to public cloud database as a service (DBaaS) and Oracle Java Cloud Service.
  • Deploying Database as a Service in Private and Hybrid Cloud Models—Best Practices (CON9496) - Join this session to hear how to rapidly deploy private cloud database as a service (DBaaS) for Oracle Database 12c and Oracle Database 11g, and get expert tips on how to avoid many of the barriers to a successful implementation. For a hybrid model, hear how to best leverage Oracle Public Cloud platform as a service (PaaS). Get the highlights on Oracle Enterprise Manager 12c migration features and Oracle Advanced Customer Support automation tools.
  • Power Up to Oracle Database 12c: Upgrade Paths for Maximum Value (CON9497) - Discover the groundbreaking innovations of multitenancy, in-memory performance, and the advanced security features of Oracle Database 12c and how it provides the ideal foundation for cloud readiness. This session covers successful examples of upgrade deployments along the cloud journey, and highlights the benefits of Oracle Database lifecycle management and security best practices.
  • Protect Your Most Valuable Assets: Up-Level Your Database Security Practices (CON9499) - Historically, data security has been focused on perimeter security solutions. Protecting the database itself is often overlooked, despite being the second-most vulnerable part of the IT environment. To ensure the safest environment and prepare for any auditing activity, additional layers of security are recommended. Join this session to learn how to further safeguard your Oracle Database regardless of where you may be in your lifecycle, and discover Oracle best practices for database review and hardening and ongoing secure monitoring to help you better respond to potential threats before they happen.
  • Get Under the Hood with Oracle Exadata Monitoring (CON10169) - In this session, learn how to quickly set up complete monitoring for your Oracle Exadata Database Machine. Our subject matter expert and global technical lead in Oracle Enterprise Manager and Oracle Exadata support shares knowledge gained from working with customer deployments worldwide. Specific topics covered include common challenges, best practices, and new features to get your complete Oracle Exadata Database Machine stack monitored using Oracle Enterprise Manager Cloud Control.

Surely we've all been in the situation of planning to apply a patch to a database environment, only to find that the patch conflicts with one that is already applied?

 

Well, good news: read the feature article Resolving patch conflicts with My Oracle Support Conflict Checker in Database Support News to find out how the My Oracle Support Conflict Checker enables you to upload an OPatch inventory and then checks whether the selected patches conflict with those already applied to your environment:

 

  • If the tool finds no conflicts, you can safely download the selected patches.
  • If conflicts are detected, the tool locates existing resolutions for you to download.
  • If no resolution is found, the tool offers you the option to request a new resolution, and you can monitor the availability of the new patch conflict resolution in the Plans and Patch Request region.

 

For detailed information, including demos of the My Oracle Support Conflict Checker, please review the recording and/or download the presentation of the Advisor Webcast "How to resolve patch conflicts with MOS Conflict Checker".

    Patch_Details_Conflict_Check.png

 

Download links can also be found in Document 1456176.1, Oracle Database Advisor Webcast Schedule and Archive recordings.

 

Ask questions and join the conversation in the related Q&A community thread.

Updated: 16th September 2015

 

Starting with the release of Oracle Database 12.1.0.2, Oracle Database Standard Edition 2 (SE2) has been released and is available for download from

 

For further information refer to

 

DB_SE2.jpg

 

Keywords: Oracle Database Standard Edition, SE, SE1, DB SE, 12.1.0.2, 12c, 12.1, SE2,

Let's start by asking what a community is. In the context of My Oracle Support Communities, I'd say it is a group of people who have the same interests, in our case an interest in Oracle Database product areas and features.

 

When people who share the same interest or passion come together, very soon lively discussions and dialogues start about newest features, offering tips on how to improve things, sharing ideas on how to approach a situation, asking for help with a tricky issue and the list goes on ....

 

Isn't this a great way to exchange information with peers, help each other and, even better, get credit for it in the form of points and badges?

 

Let's make the My Oracle Support Database community a great place for all of us. A big thank you to all participants, both customers and Oracle staff!

 

As a start, below are the top 3 most viewed threads from June for a handful of selected communities.

But there is much more to explore and a lot more Oracle Database communities across all the database areas. See for yourself - go ahead and start exploring the My Oracle Support Oracle Database Communities.

 

Database - RAC/Scalability (MOSC)

 

High Availability Data Guard (MOSC)

 

Storage Management (ASM ACFS DNFS ODM) (MOSC)

 

GoldenGate, Streams and Distributed Database (MOSC)

 

 

MOSC.png

A leap second is a second which is added to Coordinated Universal Time (UTC) in order to synchronize atomic clocks with astronomical time to within 0.9 seconds.

 

 

The next leap second will be added on June 30, 2015 23:59:60 UTC.

 

 

Review the Information Center below regarding Leap Second information across various Oracle product stacks:


  • Document 2019397.2 Information Center: Leap Second Information for All Products - Database - Exadata - Linux - Sun - Fusion - Middleware - EBS - JDE - Siebel - Peoplesoft
leap_second_date.png

Want to learn about five ways you can adjust your configuration to make the most of Windows Server operating system resources, and hence avoid running your database out of memory? An illustrative sketch of two of these tips follows the list:

  1. Increase the 2Gb Limit
  2. Implement Address Windowing Extensions (AWE)
  3. Reduce Per-Thread Memory
  4. Implement Dead Connection Detection
  5. Configure Oracle to Use Shared Servers
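As an illustrative sketch of two of these tips (values are examples only; see the article for full details and prerequisites):

# Tip 4: enable Dead Connection Detection via sqlnet.ora on the server
# (probe idle connections every 10 minutes)
echo "SQLNET.EXPIRE_TIME = 10" >> $ORACLE_HOME/network/admin/sqlnet.ora

# Tip 5: switch sessions to shared servers (requires an spfile for SCOPE=BOTH)
echo "ALTER SYSTEM SET SHARED_SERVERS = 5 SCOPE=BOTH;" | sqlplus -S / as sysdba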

 

Check out the feature article Five Tips to Avoid Memory Issues Using 32-Bit Windows OS in Database Support News for details.

 

Ask questions and join conversations in the My Oracle Support Database community.

 

memory.png

Dave Meeson-Oracle

Many DBAs may be interested in the new feature Invisible Columns introduced in Oracle Database 12c Release 1, as it provides a means to make changes to a table without interrupting applications that use that table.
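As a minimal sketch of the syntax (table and column names are made up, and the credentials are placeholders):

# Add a column that existing applications will not see via SELECT * or DESCRIBE
echo "ALTER TABLE orders ADD (discount_pct NUMBER INVISIBLE);" | sqlplus -S app_user/app_password
# The column can still be referenced explicitly by name, and made visible later
echo "ALTER TABLE orders MODIFY (discount_pct VISIBLE);" | sqlplus -S app_user/app_password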

 

Check out this Database Support News feature article Oracle Database 12.1 New Feature: Invisible Columns

 

Ask questions and join conversations regarding this topic in the My Oracle Support Database Administration Community

 

O_Database12c_clr.gif

SusanA-Oracle

Check out this Database Support News feature article Oracle Net Services 12.1 New Features to learn about Database 12c Oracle Net Services architecture changes as well as some of the new features:

 

  • Overview of an architecture change
  • New Features
    • Larger Session Data Unit Sizes
    • Advanced Network Compression
    • New Implementation of Dead Connection Detection (DCD)
    • Intelligent Client Connection
    • Incident Generation for Process Failures
    • Valid Node Checking for Registration
    • Support of Oracle Home User on Windows

 

Ask questions and join conversations regarding this topic in this Database Networking Community discussion.

O_Database12c_clr.gif

Corina-Oracle

Understanding how OGG captures and delivers data helps you understand why certain issues occur and how to avoid them. Proper setup and configuration of OGG, combined with knowing how to log and trace the OGG groups, can significantly assist in problem resolution.

 

The feature article Oracle GoldenGate (OGG) Troubleshooting in the Database Support News, outlines common issues one may encounter with Oracle GoldenGate (OGG) and the steps and utilities to troubleshoot and resolve them.
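As a starting point for that kind of troubleshooting, here is a hedged sketch of a first-pass triage from the GoldenGate installation directory (the path and the group name EXT1 are hypothetical):

# Check the status of the Manager, Extract and Replicat processes,
# then review the report file of a problem group for errors
cd /u01/app/ogg && ./ggsci <<'EOF'
INFO ALL
VIEW REPORT EXT1
EOF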

 

View or download the recording of the Oracle Database Advisor Webcast: Oracle GoldenGate Troubleshooting (select the Archived 2015 tab).

 

Gank-Oracle

There is a lot of buzz about the Cloud, and it can be quite confusing to work out what it actually is.

 

Oracle has introduced its Database as a Service Cloud offering. There is a lot of information available; to get you started, check out the Database Support News feature article Five Things You Need to Know About the Oracle Database Cloud Service

and read about the five important things everyone needs to know:

 

  • What is the Cloud
  • Choose your level of control and effort
  • Same Oracle Database software PLUS Cloud Tooling
  • Database Cloud - Public or Private
  • Special support

DBaaS.jpg

 

Roger K-Oracle

Welcome to the My Oracle Support Community! We highly encourage you to personalize your community display name to make your activity more memorable. Please see https://community.oracle.com/docs/DOC-1022508 for instructions.