
Imagine you encounter an issue with your database and need to collect the relevant trace and diagnostics files, perhaps even across several nodes. Sound familiar? If so, you probably also know how time-consuming and complex this task can be.


Well, did you know that there is a tool that can do this task for you? It is called Trace File Analyzer Collector, or TFA for short, and it is applicable to both RAC and non-RAC databases.


As the systems supporting today's complex IT environments become larger, more complex and more numerous, it can be difficult and time-consuming for those managing them to know which diagnostics to collect. In addition, systems may be distributed among different data centers. When an issue occurs, it can therefore take several attempts and iterations before all the relevant diagnostic files are uploaded to Oracle Support for troubleshooting.


Before we go into the details of TFA, let's have a look at a traditional diagnostic lifecycle:



Trace File Analyzer Collector (TFA) addresses exactly these problems by providing a simple, efficient and thorough mechanism for collecting ALL relevant first-failure diagnostic data in one pass, as well as useful interactive diagnostic analytics. TFA is started once and collects data from all nodes in the TFA configuration. As long as the approximate time at which a problem occurred is known, one simple command collects all the relevant files from the entire configuration based on that time. The collection includes Oracle Clusterware, Oracle Automatic Storage Management and Oracle Database diagnostic data, as well as patch inventory listings and operating system metrics.


Business Drivers for Implementing TFA


Four key business drivers should be considered in a decision to implement Trace File Analyzer Collector.


  • Reduced Costs
    Any problem that leads, or may lead, to a service interruption costs money and ties up resources. The faster a failed service can be restored, the sooner normal operation can resume and IT staff can get back to more productive activities. Spending time collecting and uploading diagnostic data over and over again, whether due to time pressure or to a lack of knowledge about what data is needed, lengthens the resolution cycle. TFA can help compress this aspect of the cycle.
  • Reduced Complexity
    Collecting the right data is crucial to efficient analysis of diagnostic data. In moments of distress, collecting the relevant data is often replaced by collecting everything, including unnecessary diagnostic data. Analyzing unnecessary data means extra cycles spent determining what was provided. TFA helps eliminate those extra cycles by providing a standardized way of collecting the relevant diagnostic data across all Oracle environments, supporting nearly all configurations. TFA also comes bundled with the Support Tools Bundle, which consists of tools often requested by Oracle Support for collecting additional diagnostics. A convenient command line interface is provided for invoking those tools, eliminating the need for users to download and install them separately.
  • Increased Quality of Service
    Applying TFA’s pruning algorithms during the collection and upload phases allows Oracle Support to respond much more quickly and provide more valuable support. TFA’s one-file-per-host transfer accelerates root cause analysis by avoiding multiple rounds of communication between Oracle Support and the customer.
  • Improved Agility
    Using Trace File Analyzer Collector means not having to worry about what data to collect. Once set up and configured, it can be run either on an automated, event-driven basis or on demand, leaving IT personnel time to focus on more productive tasks. TFA enables first-level production support personnel to upload diagnostic data just as efficiently and precisely as senior support staff.




TFA Architecture and Configuration Basics


TFA is implemented as a Java Virtual Machine (JVM) that is installed on, and runs as a daemon on, each host in the TFA configuration. Each TFA JVM communicates with its peer TFA JVMs in the configuration through secure sockets. A Berkeley Database (BDB) on each host stores TFA metadata about the configuration and about the directories and files that TFA monitors on that host.


Supported versions (as of Sept. 2015) of Oracle Clusterware, ASM and Database are 10.2, 11.1, 11.2 and 12.1. In recent releases, TFA is shipped with, and installed as part of, the Grid Infrastructure home. For any other version, TFA must be downloaded and installed separately following the instructions in My Oracle Support Note 1513912.1.


Supported platforms (as of Sept. 2015) are Linux, Solaris, AIX and HP-UX.


Supported Engineered Systems (Sept. 2015) are Exadata, Zero Data Loss Recovery Appliance (ZDLRA) and the Oracle Database Appliance (ODA).  Support on engineered systems includes database and storage servers.


Use My Oracle Support (MOS) Document 1513912.1 to download TFA and to monitor for updates on version and platform support.


For supported versions where TFA is not bundled with the Grid Infrastructure home, Oracle recommends that TFA be downloaded from the My Oracle Support document and installed as part of a standard build. Oracle also recommends that any bundled version be upgraded to the latest version downloadable from My Oracle Support.


Please note that TFA is also useful for non-RAC, single-instance database servers; it can be downloaded from the My Oracle Support document mentioned above and installed manually.


A simple command line interface (CLI) called tfactl is used to execute the commands that TFA supports. The JVMs accept commands only from tfactl, which is used to monitor the status of the TFA configuration, to make configuration changes and, ultimately, to take collections.


TFA is a multi-threaded application with a very small resource footprint. Usually users are not even aware that it is running. The only time TFA consumes any noticeable CPU is when it performs an inventory or a collection, and then only for brief periods.


Collections are initiated from any host in the configuration. Here’s an example that performs a cluster-wide collection for the last 4 hours for all supported components:

$ tfactl diagcollect


The initiating host’s JVM communicates the collection requirement to peer JVMs on other hosts if files are needed from other hosts in the configuration. All collections across the configuration are run concurrently and copied back to the TFA repository on the initiating node.  When all collections are complete, the user can upload them from a single host to an Oracle Service Request.


The TFA repository can be configured on a shared filesystem or locally. If a shared filesystem is used, each host’s collection result is stored in a specific subdirectory as soon as the collection is complete. If the repository is instead configured locally on each host, each collection result is copied to the initiating host’s repository and then deleted from the remote host. In either case, the user can obtain the files needed for a Service Request from a single location, which is listed at the end of the output of each collection.


Managing TFA is simple. For example, adding nodes to or removing nodes from the TFA configuration after the initial install is straightforward, and only required when the configuration, such as the cluster topology, changes. New databases added to a configuration, on the other hand, are discovered automatically.


Under normal operating conditions, TFA spawns a thread to monitor the end of each alert log in the configuration: Database, ASM and Clusterware. TFA watches for certain events, such as node or instance evictions, and certain errors, such as ORA-00600 and ORA-07445. TFA stores metadata about these events in the BDB for future reference.
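The event monitoring described above amounts to pattern matching against new alert-log lines. The following sketch is illustrative only; the pattern set and function name are assumptions, not TFA's actual implementation:

```python
import re

# Illustrative event patterns similar to those TFA watches for;
# the real set is defined by TFA itself.
EVENT_PATTERNS = [
    re.compile(r"ORA-00600"),                    # internal error
    re.compile(r"ORA-07445"),                    # exception encountered
    re.compile(r"instance .* evicted", re.IGNORECASE),
]

def scan_alert_lines(lines):
    """Return the lines that match a registered event pattern."""
    hits = []
    for line in lines:
        if any(p.search(line) for p in EVENT_PATTERNS):
            hits.append(line)
    return hits

sample = [
    "Completed: ALTER DATABASE OPEN",
    "Errors in file /u01/trace/ora_1234.trc:",
    "ORA-00600: internal error code, arguments: [kkocxj]",
]
print(scan_alert_lines(sample))
```

In practice TFA would run this kind of scan continuously against only the newly appended tail of each log, storing metadata about every hit.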


TFA periodically runs an auto-discovery and a file inventory to stay up to date on databases and diagnostic files. A file inventory is kept for every directory registered in the TFA configuration, and metadata about those files, such as file name, first timestamp, last timestamp and file type, is maintained in the BDB. Differing timestamp formats are also “normalized”, as the format of timestamps can vary from one file type to the next.
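As a rough illustration of that normalization, here is a sketch that parses a couple of timestamp formats into one canonical form. The formats shown are merely illustrative examples, not TFA's actual format list:

```python
from datetime import datetime

# Two of the many timestamp formats found in Oracle diagnostic files
# (illustrative examples only, not an exhaustive or authoritative list).
KNOWN_FORMATS = [
    "%a %b %d %H:%M:%S %Y",     # e.g. "Mon Sep 07 12:30:01 2015"
    "%Y-%m-%d %H:%M:%S.%f",     # e.g. "2015-09-07 12:30:01.123"
]

def normalize(ts):
    """Parse a timestamp in any known format into a canonical string."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(ts, fmt).strftime("%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue                     # try the next known format
    raise ValueError("unrecognized timestamp: " + ts)

print(normalize("Mon Sep 07 12:30:01 2015"))
print(normalize("2015-09-07 12:30:01.123"))
```

With every file's first and last timestamps stored in one canonical form, time-range queries over the inventory become simple comparisons.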



Functionality Matrix



System Type and Comments

  • RAC: installed by default in $GI_HOME/tfa; CLI at $GI_HOME/tfa/bin/tfactl (tfactl -h for options); analytic functionality added from the JUL2014 PSU onwards (tfactl -analyze -h for options)
  • Single instance (non-RAC): install the latest version from Document 1513912.1
  • Exadata: Database server collection support only in the base release; Storage Server collection support added with the APR2014 PSU Bundle Patch; analytic functionality added with the JUL2014 PSU Bundle Patch (tfactl -analyze -h for options)
  • Other engineered systems with Grid Infrastructure (e.g. ODA): installed by default in $GI_HOME/tfa; Database and Storage Server collections and analytic functionality



Support Tools Bundle


TFA includes tools commonly requested or recommended by Oracle Support. The tfactl command interface is used to manage these tools: users can start, stop, check the status of, and deploy them in a standard way using tfactl commands, without needing to download and install each tool separately as in the past. Using the bundled tools from within TFA provides a standard command interface, standard deployment locations and collection of the tools' output by TFA.


NOTE:  Oracle recommends that TFA Collector be kept at the latest version using downloads from My Oracle Support Document 1513912.1 which is updated frequently in order to make new features and other improvements available.




TFA Collections


TFA only collects files that are relevant to the time of the problem. All the user has to know is the approximate time of the problem, which can then be provided using a time modifier in the CLI. TFA will then gather all the first-failure diagnostic files Oracle Support needs in order to triage the problem. The simplest and most thorough form of collection is for the default time period, which is the last 4 hours.

Example #1 :  $ tfactl diagcollect


Alternatively, a more precise time can be given; for example, for a problem which occurred within the last hour:

Example #2: $ tfactl diagcollect -since 1h

The command shown in Example #1 above initiates a TFA run that collects all the relevant diagnostic data for an issue that occurred within the last four hours. Because no further restriction, such as a specific node or component, is given, this command performs a configuration-wide collection of all OS, Clusterware, ASM, Database, Cluster Health Monitor and OS Watcher files relevant to that time. In addition, TFA collects other relevant files and configuration data, such as patch inventories. Similarly, the command shown in Example #2 collects all the files modified within the last hour, most likely resulting in a smaller and more targeted collection.


tfactl provides a series of command line options to limit the collection to a subset of hosts or components. However, if the root cause is completely unknown, it may be best simply to collect as much data for the relevant time as possible, so that nothing is missed. Reducing the data to collect is not as efficient as being precise about the time of the incident: the more precisely the time of the problem can be specified, the smaller the collection will be. Likewise, the more specifically a problem can be narrowed down, the more targeted a collection can be in terms of nodes and components.


TFA's inherent pruning of larger files, skipping data in traces and logs that lies outside the specified time, is another way to make data collection, as well as analysis, more efficient. Again, the more precisely the time can be specified, the more the files can be pruned. The design goal of TFA is to make collections as complete, as relevant and as small as possible, saving time in collecting, copying, uploading and transferring files for both the customer and Oracle Support, following a simple yet very efficient principle: a file that was not modified around the time of the incident is unlikely to be relevant.
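The keep-or-skip decision behind this time-based selection can be sketched as follows, assuming files are represented as (name, last-modified) pairs; the file names, times and window are hypothetical:

```python
from datetime import datetime, timedelta

def select_relevant(files, incident_time, window_hours=4):
    """Return the files whose last-modified time falls within the window
    ending at incident_time (a simplified model of TFA's selection)."""
    start = incident_time - timedelta(hours=window_hours)
    return [name for name, mtime in files if start <= mtime <= incident_time]

incident = datetime(2015, 9, 7, 14, 0)
files = [
    ("alert_orcl.log", datetime(2015, 9, 7, 13, 55)),  # modified near the incident
    ("old_trace.trc",  datetime(2015, 9, 1, 9, 0)),    # stale, skipped
]
print(select_relevant(files, incident))
```

The same window would also drive the pruning inside large files: sections of a log outside the window can be skipped even when the file itself qualifies.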


When a diagnostic collection is taken, the first step, taken automatically, is a "pre-collection" inventory, in case any files have been modified or created since the last periodic inventory. In a pre-collection inventory, only files for the specified databases and/or components are inventoried.
TFA can be configured to take collections automatically when registered events occur and store them in the repository. Examples of registered events are node evictions and ORA-00600 errors, among others documented in the TFA User Guide. A built-in flood control mechanism for the auto-collection feature prevents a rash of duplicate errors or events from triggering multiple duplicate collections. Should the repository reach its configurable maximum size, TFA suspends taking collections until space is cleared in the repository using the purge command.

Example #3: # tfactl  purge -older 30d

In Example #3, any collections older than 30 days are deleted, thereby freeing space for new collections. Note that the purge command must be run as root or under sudo control.
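The flood-control mechanism mentioned earlier can be sketched as a per-event cooldown; the class name and the 30-minute window here are assumptions for illustration, not TFA's actual implementation:

```python
class FloodControl:
    """Suppress duplicate auto-collections for the same event signature
    within a cooldown period (the 30-minute default is an assumption)."""
    def __init__(self, cooldown_minutes=30):
        self.cooldown = cooldown_minutes * 60
        self.last_seen = {}          # event signature -> epoch seconds

    def should_collect(self, event, now):
        # Record every occurrence, so a sustained storm keeps suppressing.
        last = self.last_seen.get(event)
        self.last_seen[event] = now
        return last is None or now - last >= self.cooldown

fc = FloodControl()
print(fc.should_collect("ORA-00600:node1", 0))     # True: first occurrence
print(fc.should_collect("ORA-00600:node1", 60))    # False: within cooldown
print(fc.should_collect("ORA-00600:node1", 3700))  # True: cooldown elapsed
```

Keying the suppression on an event signature (error code plus node, say) is what keeps one misbehaving instance from flooding the repository with near-identical collections.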




TFA Analytics


TFA has built-in analytics that enable the user to summarize the results from Database, ASM and Clusterware alert logs, system message files and OS Watcher performance metrics across the configuration. Errors, warnings and search patterns can be summarized for any given time period for the entire configuration, without needing to access the files on each node. In a similar way, OS Watcher data can be summarized for the entire configuration for any given time period.
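In spirit, such a cluster-wide summary is a tally of pattern matches per node. This sketch is illustrative only; the pattern list and data shape are assumptions, not the actual tfactl analyze implementation:

```python
from collections import Counter

def summarize(log_lines, patterns=("ORA-00600", "ORA-07445", "WARNING")):
    """Tally occurrences of each pattern per node from (node, line) pairs."""
    counts = Counter()
    for node, line in log_lines:
        for pat in patterns:
            if pat in line:
                counts[(node, pat)] += 1
    return counts

lines = [
    ("node1", "ORA-00600: internal error code"),
    ("node1", "WARNING: aiowait timed out"),
    ("node2", "ORA-00600: internal error code"),
]
summary = summarize(lines)
print(summary[("node1", "ORA-00600")], summary[("node2", "ORA-00600")])
```

A per-node, per-pattern table like this is exactly the kind of answer a DBA wants first: which error, on which node, how often, in which time window.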



Further Information


For further information, please refer to My Oracle Support Document 1513912.1.


For any help and support with TFA, please come and chat with us in the RAC Community.




September is Oracle Database Performance month

September is Oracle Database performance month for our blog series. Throughout the month, we will highlight some of the great performance tools we recommend for your database toolkit.


Why should you read this blog and revisit it during September? Well, there are lots of reasons:

  • Take the lead in your team to quickly and easily identify heavy loads or performance issues on your system and put action plans in place to improve the performance
  • Create your own performance toolkit
  • Recognise potential performance issues before they arise
  • Deal with performance issues confidently
  • Ensure all diagnostic data is collected for the relevant time frame when engaging Oracle Support


Go ahead, read on and find out for yourself !


You might also consider completing the Oracle Support Database Accreditation to increase your expertise with tools and best practices to support your Oracle Database. The accreditation also includes a focus on the performance tools we will highlight throughout September. What do the coolest database professionals have in their tool kits? Let’s find out.


For further details about the Oracle Support Database Accreditation, and to learn more about Performance Tuning and Diagnostics, read on.




Database Performance Tools: ADDM, AWR and ASH


Performance tuning and problem diagnosis are the two most challenging and important management tasks that any database administrator performs. Hence, let’s start our performance series with some of the tools that are delivered with the database:



The Oracle database automatically collects and stores workload information in the Automatic Workload Repository (AWR). AWR offers key information (e.g. wait classes, metrics, OS statistics); the repository is maintained automatically, and HTML reports can be produced.


AWR can be used to identify:

  • SQLs or Modules with heavy loads or potential performance issues, symptoms of those heavy loads (e.g. logical I/O (buffer gets), Physical I/O, contention, waits)
  • SQLs that could be using sub-optimal execution plans (e.g. buffer gets, segment statistics), numbers of executions, parsing issues.
  • General performance issues, e.g. system capacity (I/O, memory, CPU), system/DB configuration. SGA (shared pool/buffer cache) and PGA sizing advice.
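AWR stores cumulative statistics at each snapshot, and an interval report is essentially the difference between two snapshots. Here is a minimal sketch of that delta arithmetic, using made-up statistic names and values:

```python
def interval_delta(snap_begin, snap_end):
    """Compute per-statistic deltas between two cumulative snapshots,
    the basic arithmetic behind an AWR interval report."""
    return {stat: snap_end[stat] - snap_begin[stat] for stat in snap_begin}

# Hypothetical cumulative counters captured at two hourly snapshots.
snap_100 = {"physical reads": 10_000, "buffer gets": 250_000}   # 09:00 snapshot
snap_101 = {"physical reads": 16_500, "buffer gets": 410_000}   # 10:00 snapshot
print(interval_delta(snap_100, snap_101))
```

Because the counters are cumulative, any pair of snapshots bounds an analysis interval, which is why choosing begin and end snapshots around the problem window matters when producing an AWR report.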



The Automatic Database Diagnostic Monitor (hereafter called ADDM) is an integral part of the Oracle RDBMS, capable of gathering performance statistics and advising on changes to solve any measured performance issues. For this, ADDM uses the Automatic Workload Repository (AWR), a repository defined in the database that stores database-wide usage statistics at fixed intervals (60 minutes by default).


To make use of ADDM, a PL/SQL interface called DBMS_ADVISOR has been implemented. This interface may be called directly through the supplied $ORACLE_HOME/rdbms/admin/addmrpt.sql script, or used in combination with Oracle Enterprise Manager.




Active Session History (ASH) acquires the active session’s activity information by sampling it from the database kernel’s session state objects. The quantity of information sampled by ASH could be quite voluminous, so ASH maintains a fixed-size circular buffer allocated during database start-up time in the database System Global Area (SGA).


The ASH data is periodically flushed to disk and stored in the Automatic Workload Repository (AWR) which we covered in the previous blog. The information can be used for drilldown purposes during problem diagnosis or performance tuning. In addition to ADDM using the ASH to achieve its objectives, the ASH contents will also be displayed in the Oracle Enterprise Manager (EM) performance screen.
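The sampling-and-flush cycle described above can be sketched as follows; the class, field names and tiny capacity are illustrative assumptions, not ASH's actual structures:

```python
from collections import deque

class AshBuffer:
    """Fixed-size circular buffer of session-activity samples, in the
    spirit of ASH's in-SGA buffer (sizes and fields are illustrative)."""
    def __init__(self, capacity):
        self.samples = deque(maxlen=capacity)   # oldest entries overwritten

    def sample(self, active_sessions):
        """Record one sampling pass over the active sessions."""
        for sid, wait_event in active_sessions:
            self.samples.append((sid, wait_event))

    def flush(self):
        """Drain the buffer, as when ASH data is persisted to AWR."""
        drained = list(self.samples)
        self.samples.clear()
        return drained

buf = AshBuffer(capacity=3)
buf.sample([(101, "db file sequential read"), (205, "log file sync")])
buf.sample([(101, "db file sequential read"), (333, "CPU")])
print(len(buf.samples))   # prints 3: the oldest sample was overwritten
```

The fixed-size, overwrite-oldest design is the key trade-off: memory use stays bounded regardless of activity, at the cost of losing the oldest unsampled-to-disk entries if flushing falls behind.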



Please note that these three tools require the Oracle Diagnostics Pack license. As an alternative, you can use Statspack and SQL Trace/TKPROF.


Further information is available via the Oracle Database documentation and My Oracle Support.



Support Tools SQLT and SQLHC


Oracle Support offers several support tools that can be downloaded for free from My Oracle Support. We want to focus in this update on two of those tools related to diagnosing and identifying performance issues: SQLT and SQLHC.


In short, SQLT, also known as SQLTXPLAIN, takes one SQL statement as input and outputs a set of diagnostics files. These files are commonly used to diagnose SQL statements performing poorly. SQLT connects to the database and collects execution plans, Cost-Based Optimizer (CBO) statistics, schema object metadata, performance statistics, configuration parameters, and similar elements that influence the performance of the SQL being analyzed.



SQLT provides pretty much everything that is needed to perform a SQL Tuning analysis and more. SQLT does not gather application data, but it does gather many pieces of metadata which, besides helping in the diagnostics phase of a SQL Tuning issue, may also allow the creation of a Test Case on a separate system for further investigation.



SQLT can be used if you want to perform the SQL Tuning effort yourself, or you may have been asked to use SQLT to provide the outputs to Oracle Support as part of a Service Request. You may even want to use it pro-actively since it presents an "Observations" section as part of the main report, which includes a comprehensive set of health-checks in and around the SQL statement that is being analyzed. This section can be considered as a "heads-up" and you may want to act on some of the observations and retry the execution of your SQL, especially if they involve CBO Statistics.



SQLT has several main execution methods:

  • XTRACT: Finds SQL in memory and/or AWR for analysis. This is the most common method for starting an analysis. It requires the SQL_ID or the HASH_VALUE of the SQL statement in question, which can be found in a SQL Trace file, an AWR report or a StatsPack report.
  • XECUTE: Executes SQL from a text file to produce a set of diagnostics files
  • XTRXEC: Combines the features of XTRACT and XECUTE executing both methods serially
  • XPLAIN: based on the EXPLAIN PLAN FOR command and executed on SQL from a text file


The generated diagnostics output is a zip file whose name includes a unique five-digit number NNNNN. First, review the file named sqlt_sNNNNN_main.html. This file, together with a SQL Trace and its TKPROF, provides a clear picture of the SQL statement being analyzed. For one SQL statement, SQLT provides the following information:

  • all known Execution Plans
  • CBO Schema Statistics
  • System Statistics
  • CBO Parameters

and a large list of other pieces of information that may be useful during the SQL Tuning analysis.



The SQL Tuning Health-Check script (SQLHC) is used to check the environment in which a single SQL statement runs: Cost-Based Optimizer (CBO) statistics, schema object metadata, configuration parameters and other elements that may influence the performance of the one SQL being analyzed. It does this while leaving no database footprint, ensuring it can be run on all systems.


When executed for one SQL_ID, this script generates an HTML report with the results of a set of health-checks around the one SQL statement provided. You can find the SQL_ID of a statement from an AWR or ASH report or you can select it from the database using the V$SQL view.


Health-checks are performed over:

  • CBO Statistics for schema objects accessed by the one SQL statement being analyzed
  • CBO Parameters
  • CBO System Statistics
  • CBO Data Dictionary Statistics
  • CBO Fixed-objects Statistics


Have you had a chance to check out the documents below? Get moving and get accredited!




SRDC - Service Request Data Collections


When investigating performance issues, it is important to start with the appropriate set of data. To that end, we have created a series of Oracle Service Request Data Collection (SRDC) articles to collect data for the most common performance issues.


Currently we have articles to collect diagnostics data for

  • slow database performance
  • hangs
  • slow SQL performance
  • errors (ORA-00600/ORA-07445/ORA-00054/ORA-00060/ORA-12751)
  • locking
  • contention for various wait events
  • SQL Plan Management
  • Database Testing: Capture and Replay

and more ...


SRDCs are particularly beneficial because they collect data targeted at a specific problem area and include all the details in one place. When you encounter an issue, the SRDC tells you which diagnostics are required, so you do not have to wait for confirmation from a support engineer, which in turn saves you time.

Also, since the diagnostic information is standardised, it can be pre-processed and the important information identified, which allows your issue to be addressed more quickly.


The following article provides the list of performance related SRDCs:

In addition to diagnostics, each SRDC also provides links to troubleshooting articles tailored to the issue at hand.


SRDCs are used to define what information to collect for various different problem types with the goal that support engineers have all relevant information to commence work on an issue straight away. Some issues only require a generic set of diagnostics whereas others use specifically targeted SRDCs to collect more detailed information.

If the issue you are facing is not on the list, use the Database Performance Problem SRDC to collect standard performance information.


Make some time and get accredited



We hope you enjoyed the Oracle Database performance blog series. Some of the key points for Database Performance best practice are:

  • Agree on a baseline: record the officially acceptable performance levels in the service level agreement
  • Find the worst bottleneck, fix the problem, evaluate performance, then start again until the baseline is achieved
  • Evaluate the performance tools from the Oracle Performance Toolkit and decide which ones are best suited for your environment - familiarize yourself with the tools before a performance issue arises
  • Prevention is always better than cure - even for your database environment !
  • Know how to collect the appropriate information when logging a SR

Last but not least

Updated: 15th December 2015





There have been some interesting announcements, updates and presentations during this year's OpenWorld 2015 in San Francisco. Check out the highlights and references outlined below related to Oracle Database.


Access to the starting point is via Oracle OpenWorld portal


Keynotes are available as full-length replays and 2-3 minute overview highlights from Oracle OpenWorld 2015: On Demand Videos

  • Integrated Cloud: Application and Platform Services - Larry Ellison
  • Vision 2025: Digital Transformation in the Cloud - Mark Hurd
  • Design Tech High School & Gender Diversity - Safra Catz
  • Oracle Software Innovations - Thomas Kurian
  • The Secure Cloud - Larry Ellison


Focus On Documents

Focus On Documents provide a roadmap to sessions, categorised by product and product area.


Content Catalog

The Content Catalog offers search capabilities by keywords for sessions, speakers or demos or by selecting predefined filters on the left side. Many of the presentations are now available to download from the Session Catalog site.

How to Download:

  • From the Oracle OpenWorld homepage, navigate to the Session Catalog site.
  • Filter or search to find your session of choice.
  • Once your search result appears, click the plus sign to expand info about the session. If the speaker has uploaded the presentation, you will see a link to download a PDF or PowerPoint presentation.

Support Sessions


  • CON1980 - Deploying a Private Cloud with the Oracle Cloud Platform: Customer Case Study
    Join this session to learn how a large Dutch government organization implemented a platform-as-a-service (PaaS) solution on Oracle engineered systems including multiple Oracle Exadata, Oracle Exalogic, Oracle Database Appliance, and Oracle’s ZFS systems. In parallel, this organization reduced IT resource requirements by relying on Oracle for the operational management of the installed infrastructure and Oracle software (Oracle Database 12c and Oracle’s middleware solutions).
  • CON2656 - Deploying a Private Cloud with Oracle Engineered Systems: Lessons from the Trenches
    The private cloud is becoming the preferred deployment model for mission-critical applications. This session provides business and technical insight into some of the pragmatic considerations, key operational issues, integration strategies, and deployment options to make the transition to cloud successful. Hear how enterprises can couple adoption of the cloud with DevOps to double up on the benefits. Also included are customer case studies and lessons learned from large deployments.
  • CON8662 - Best Practices: SQL Tuning Made Easier with SQLTXPLAIN
    SQL tuning can be a daunting task. Too many things affect the cost-based optimizer (CBO) when you’re deciding on an execution plan. CBO statistics, parameters, bind variables, peeked values, histograms, and more are common contributors. The list of areas to analyze keeps growing. Oracle has been using SQLTXPLAIN (SQLT) as part of a systematic way to collect the information pertinent to a poorly performing SQL statement and its environment. With a consistent view of this environment, an expert on SQL tuning can perform more diligently, focusing on the analysis and less on the information gathering. This tool can also be used by experienced DBAs to make their life easier when it comes to SQL tuning. Learn more in this session.
  • CON8663 - Best Practices for Maintaining Your Oracle Real Application Clusters
    You chose Oracle Real Application Clusters (Oracle RAC) to help your organization deliver superior business results. Now learn how to further enhance the availability, scalability, and performance of Oracle RAC by staying on top of the latest success factors and best practices developed by Oracle RAC experts. In this session, Oracle experts discuss proven best practices to help you work more efficiently, upgrade more easily, and avoid unforeseen incidents. Topics include how to keep Oracle RAC in check and simplify diagnostic collection.
  • CON8664 - Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support
    You've heard about Oracle Database 12c and its new capabilities. Now join this session to hear from Oracle experts about all the great tools and resources Oracle offers to help you upgrade to Oracle Database 12c efficiently and effectively. Session presenters from Oracle Support bring years of database experience and share recent lessons learned from Oracle Database 12c upgrades at companies of all sizes worldwide. You are sure to leave with valuable information that will help you plan and execute your upgrade. And most of the tools and resources discussed are available to current customers at no additional cost through their standard product support coverage.
  • CON8667 - Performance Analysis Using Automatic Workload Repository and Active Session History
    The Automatic Workload Repository and Active Session History features in Oracle Enterprise Manager 12c are two powerful tools for performance analysis that can quickly identify and confirm performance problems with Oracle Database. Join us for this session to hear about real case studies from Oracle experts. Learn how these tools, along with the DB Time model, can help you avoid wasting time guessing the causes of your database performance issues. If you are not familiar with Automatic Workload Repository and Active Session History features, this is a don’t-miss session.
  • CON8670 - Best Practices for Supporting and Maintaining Oracle Exadata
    Join this session to discover best practices for maintaining and supporting Oracle Exadata. Experts from Oracle Support are joined by customer representatives from Intel, Sherwin-Williams, and Travelers to share successes and provide actionable recommendations to maximize system availability and drive operating efficiencies. Attendees hear how to take full advantage of Oracle Platinum Services, a special entitlement for qualifying engineered systems.
  • CON8671 - Best Practices for Maintaining and Supporting Oracle Enterprise Manager
    In this session, learn about best practices, tips, and tools for maintaining and getting the most out of Oracle Enterprise Manager. Experts from Oracle Support offer knowledge gained from working with Oracle customers worldwide. They look at patching, upgrades, issue resolution, and more. Specific topics covered include Oracle Enterprise Manager metrics and health checks, remote diagnostics, communities, and how to receive priority service request handling.
  • CON8716 - Maximize Your Investment in the Oracle Cloud Platform
    You chose the Oracle Cloud Platform to help your organization deliver superior business results. Now learn how to take full advantage of your Oracle Cloud service with all the great tools and resources you’re entitled to with your subscription. In this session, Oracle experts provide proven best practices to help you realize more value faster from your Oracle Cloud service. New users and cloud experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.
  • CON9495 - Hybrid Cloud Best Practices: Migrating to Public Cloud Platform as a Service
    Explore the various scenarios in which you can leverage the public cloud to enhance business operations including hybrid development and testing, integration between platform as a service (PaaS) and on-premises applications, and integration between PaaS and software as a service (SaaS). This session discusses migration strategies from an on-premises database and Oracle WebLogic-based applications to public cloud database as a service (DBaaS) and Oracle Java Cloud Service.
  • CON9496 - Deploying Database as a Service in Private and Hybrid Cloud Models: Best Practices
    Join this session to hear how to rapidly deploy private cloud database as a service (DBaaS) for Oracle Database 12c and Oracle Database 11g, and get expert tips on how to avoid many of the barriers to a successful implementation. For a hybrid model, hear how to best leverage Oracle Public Cloud platform as a service (PaaS). Get the highlights on Oracle Enterprise Manager 12c migration features and Oracle Advanced Customer Support automation tools.
  • CON9497 - Power Up to Oracle Database 12c: Upgrade Paths for Maximum Value
    Discover the groundbreaking innovations of multitenancy, in-memory performance, and the advanced security features of Oracle Database 12c and how it provides the ideal foundation for cloud readiness. This session covers successful examples of upgrade deployments along the cloud journey, and highlights the benefits of Oracle Database lifecycle management and security best practices.
  • CON9499 - Protect Your Most Valuable Assets: Up-Level Your Database Security Practices
    Historically, data security has been focused on perimeter security solutions. Protecting the database itself is often overlooked, despite being the second-most vulnerable part of the IT environment. To ensure the safest environment and prepare for any auditing activity, additional layers of security are recommended. Join this session to learn how to further safeguard your Oracle Database regardless of where you may be in your lifecycle, and discover Oracle best practices for database review and hardening and ongoing secure monitoring to help you better respond to potential threats before they happen.
  • CON10169 - Get Under the Hood with Oracle Exadata Monitoring
    In this session, learn how to quickly set up complete monitoring for your Oracle Exadata Database Machine. Our subject matter expert and global technical lead in Oracle Enterprise Manager and Oracle Exadata support shares knowledge gained from working with customer deployments worldwide. Specific topics covered include common challenges, best practices, and new features to get your complete Oracle Exadata Database Machine stack monitored using Oracle Enterprise Manager Cloud Control.
