Architecture Matters, Part 3: Next Steps—Optimizing for Database Performance, Security, and Availability

Version 9

    by Randal Sagrillo, Lawrence McIntosh, and Richard Friedman

     

    This article discusses how the latest Oracle-on-Oracle solutions optimize business-critical applications and database analytics.

     

    Table of Contents
    Introduction
    Optimizing for Performance
    Optimizing for Security
    Optimizing for High Availability
    HA Clustering
    Conclusion
    For More Information
    About the Authors

     

    Introduction

     

    When it comes to business-critical processes, architecture really does matter.

     

    Oracle applications, middleware, and database software technologies operate on a wide variety of non-Oracle hardware platforms. However, only running Oracle software on Oracle hardware takes advantage of Oracle's comprehensive applications-to-disk coengineering, end-to-end testing, and documented best practices. This is how enterprise customers can meet growing demands for advanced data security, higher efficiency, and better performance for their mixed online transaction processing (OLTP), data warehousing, and analytical workloads.

    OOS_Logo_small125.png
    Oracle Optimized Solutions provide tested and proven best practices for how to run software products on Oracle systems. Learn more.

     

    As we saw in the first article in this series ("Architecture Matters, Part 1: Assessing Application Migration Impact"), migrating an enterprise's business-critical databases to a new platform might seem daunting, but following four straightforward planning steps leads to success. In "Architecture Matters, Part 2: Navigating Database Replatforming—An Example," we explored a real-world example of migrating an Oracle Database instance from IBM AIX to Oracle Solaris, demonstrating the simplicity and ease of operation made possible by using tools built into Oracle Database.

     

    Only by migrating to an Oracle-on-Oracle infrastructure can an organization realize the benefits of coengineered, state-of-the-art products and technologies at each layer of the solution stack. And Oracle Optimized Solution for Secure Oracle Database provides the blueprints for migrating to proven, thoroughly tested database configurations designed for security, high availability, and high performance.

     

    In this final article in the series, we take a look at the benefits of migrating to the unique hardware and software features of an end-to-end Oracle Optimized Solution based on Oracle's latest SPARC servers, the Oracle Solaris operating system, and Oracle Database software.

     

    All the articles in this "Architecture Matters" series are located here:

     

     

    Optimizing for Performance

     

    Today's business climate demands that organizations run deep analyses over their accumulating data in real time to produce the critical reports they need to stay ahead. However, most analytics algorithms take a heavy toll on overall database and system performance. To get around this, some organizations have been forced to accept the increased cost and risk of deploying complex processes across multiple systems to generate these reports.

     

    The previous article demonstrated how to migrate an Oracle Database instance to Oracle Solaris 11 and Oracle's SPARC T5-4 platform. The next step is not only to improve the overall security of the system, but also to eliminate infrastructure sprawl while making everything run faster. To do that, we simply upgrade the Oracle Database instance to the latest SPARC servers running the latest version of Oracle Database 12c.

     

    Migrating to an Oracle Optimized Solution for Secure Oracle Database configuration on the latest SPARC servers, as shown in Figure 1, provides the extreme performance for mixed workloads that organizations need. How to do that is extremely simple—just follow the same approach shown in the previous article in this series, where we migrated to the SPARC T5-4 server, but this time migrate the Oracle Database instance to the SPARC T7-2 server!

     

    f1.png

    Figure 1: Moving to the SPARC T7-2 server.

     

    The next section explains why this strategy works.

     

    Oracle's SPARC M7 Processor—Key Technology Innovations

     

    At the center of Oracle's latest servers—the SPARC T7-1, T7-2, T7-4, M7-8, and M7-16 servers—is the SPARC M7 processor and its Software in Silicon features, which offload common database operations to special-purpose functions built into the hardware. Compared to other leading processors, the SPARC M7 processor delivers from two to more than four times greater OLTP database performance, and it runs reporting and analytics up to 11 times faster than alternatives without Software in Silicon technologies. Clearly, faster performance across mixed database, middleware, and application workloads gives organizations a critical edge in productivity.

     

    Software in Silicon Technology

     

    More than just upping the processor clock speed, the SPARC M7 processor represents Oracle's continuing innovations to move in-memory database functions and hard-wired data protection directly onto the chip. The Software in Silicon enhancements in the SPARC M7 processor deliver optimized security as well as greater application throughput, through the following:

     

    • Each processor has eight Data Analytics Accelerator (DAX) coprocessors that offload decompression and analytics processing from the SPARC M7 cores, freeing those cores to process other pipeline instructions and workloads virtually without overhead. The DAX coprocessors place their results in the shared L3 cache for fast core access to provide a potential factor-of-ten gain in analytics performance. This very low-overhead interprocess communication permits DAX coprocessors on different processors to also exchange messages and access remote memory locations without CPU involvement.
    • Silicon Secured Memory real-time data integrity checking guards against the pointer-related software errors and vulnerabilities usually employed by malware. This fast and efficient in-silicon, on-chip monitoring allows applications to immediately identify unauthorized or erroneous memory accesses, diagnose the cause, and take appropriate recovery actions.
    • Hardware in-line decompression can fit up to three times more data in the same memory footprint without a performance penalty. Now, larger objects, such as entire database tables, can take advantage of processing with the Oracle Database In-Memory option.
    • Accelerated on-chip cryptography performs encryption and decryption operations at hardware rates, eliminating the performance and cost barriers typically associated with the high level of secure computing that is increasingly essential for all business applications.

     

    Oracle Database In-Memory Option and Analytics

     

    With the need to rely on real-time analytics to drive strategic business decisions, companies are running analytics and performing ad-hoc queries for reports on the same operational database systems they use for business logistics, financials, customer management, and human resources. Optimizing performance in such mixed workloads can be a difficult balancing act—OLTP workloads frequently perform operations on database rows (such as inserts or simple SQL fetches), while analytical queries execute more quickly on data in a column format.

     

    f2.png

    Figure 2: Analytic queries on OLTP databases perform column access instead of row access.

     

    Traditional databases are designed to support row-oriented OLTP transactions, which means that decision-support queries that execute on these databases tend to result in suboptimal performance.

     

    To support mixed workloads in a single database more efficiently, Oracle Database 12c offers an Oracle Database In-Memory option. With this option, the Software in Silicon features in the SPARC M7 processor–based servers improve real-time analytics performance as well as OLTP transactions. The eight on-chip DAX coprocessors offload database query processing and perform real-time data decompression.

     

    By constructing a unique dual-format architecture, the Oracle Database In-Memory option maintains data in the existing row format for OLTP, but also builds a new in-memory (IM) column store optimized for analytical reporting. In mixed-workload environments, Oracle Database In-Memory demonstrates significant speedups for real-time analytical queries without any application changes. And because of the compression ratios achieved as data is placed into memory, the dual-format architecture does not double the memory footprint.

     

    f3.png

    Figure 3: The Oracle Database In-Memory option dual-format tables.
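    To make the dual-format idea concrete, populating the IM column store is a configuration exercise rather than an application change. The following sketch uses standard Oracle Database 12c syntax against the SH sample schema; the 20G store size is an illustrative value, not a recommendation:

```sql
-- Reserve part of the SGA for the IM column store
-- (takes effect after an instance restart; 20G is illustrative only).
ALTER SYSTEM SET INMEMORY_SIZE = 20G SCOPE=SPFILE;

-- Mark tables for population into the IM column store.
-- PRIORITY CRITICAL populates the table at database startup.
ALTER TABLE sh.sales INMEMORY PRIORITY CRITICAL;
ALTER TABLE sh.customers INMEMORY MEMCOMPRESS FOR QUERY LOW;
```

    The row format is left untouched, so OLTP applications continue to run unchanged while analytical queries are served from the column store.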

     

    The IM column format enables very fast "single instruction, multiple data" (SIMD) processing, by which a DAX coprocessor can check 16 values in a single CPU cycle. Data is compressed using Oracle-unique algorithms that optimize for both space and performance, and queries execute directly against the compressed data, which is decompressed only when required.
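    The effect of SIMD evaluation can be pictured in ordinary scalar code. The toy Python sketch below processes a column in fixed-width batches of 16, mirroring the figure cited above; it is an analogy for the scan pattern, not a model of the DAX hardware:

```python
def simd_scan(column, predicate, width=16):
    """Evaluate a predicate over a column in fixed-width batches,
    mimicking how a SIMD engine tests many values per cycle."""
    matches = 0
    for i in range(0, len(column), width):
        batch = column[i:i + width]                  # one "vector register" worth
        matches += sum(predicate(v) for v in batch)  # a single cycle in hardware
    return matches

values = list(range(100))
print(simd_scan(values, lambda v: v < 25))  # 25 values satisfy the predicate
```

    In hardware, each batch is evaluated in one step rather than sixteen, which is where the scan speedup comes from.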

     

    By offloading analytical query processing from SPARC M7 processor cores, the DAX coprocessors allow the main processor cores to execute other tasks, resulting in greatly improved overall performance for mixed workloads on the same system. The SPARC M7 processor is unique in this regard.

     

    f4.png

    Figure 4: DAX coprocessors free processors for other workloads.

     

    Using the In-Memory Option with Software in Silicon Technology

     

    Let's explore an example that demonstrates the benefit of combining the Oracle Database In-Memory option with the Software in Silicon technology in the SPARC M7 processor. Here, we use Swingbench's Sales History database running on a SPARC T7-2 server.

     

    To test the performance gains when running the Oracle Database In-Memory option, engineers developed a sample analytical query that investigates product loyalty: how often does a customer buy the same product month-to-month based on sales patterns collected over a 13-year period? This SQL query (Figure 5) requires a number of full table scans and hash-join operations, which are characteristic of typical ad-hoc business queries.
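    The exact query text is not reproduced here (Figure 5 shows its plan), but a hypothetical query in the same spirit, written against the SH sample schema, might self-join the sales fact table across adjacent months. All identifiers below are standard SH schema names, and the query itself is illustrative only:

```sql
-- Hypothetical product-loyalty query (illustrative only):
-- customers who bought the same product in consecutive months.
SELECT cur.prod_id,
       COUNT(DISTINCT cur.cust_id) AS repeat_buyers
FROM   sh.sales cur
       JOIN sh.sales prev
         ON  prev.cust_id = cur.cust_id
         AND prev.prod_id = cur.prod_id
         AND prev.time_id >= ADD_MONTHS(TRUNC(cur.time_id, 'MM'), -1)
         AND prev.time_id <  TRUNC(cur.time_id, 'MM')
GROUP  BY cur.prod_id
ORDER  BY repeat_buyers DESC;
```

    A query of this shape forces full scans of the fact table and a large hash join, which is the workload profile the test is designed to exercise.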

     

    f5.png

    Figure 5: SQL plan for sample analytical query.

     

    Figure 6 shows the monitored SQL execution details (from the Oracle Enterprise Manager Database Express feature of Oracle Database 12c) for the baseline analytical query without the Oracle Database In-Memory option—the query executed in 109 seconds and performed three hash-join operations that resulted in full table scans. As Figure 6 shows, because the query operated on a database in storage—not from memory using the Oracle Database In-Memory option—the query's I/O operations added to the overall duration. (Note that the storage used in this test was low-latency SAN storage from Oracle.) Without the Oracle Database In-Memory option, the query generated 1,444 I/O requests with a total I/O transfer of 1 GB.

     

    f6.png

    Figure 6: A sample analytical query took 109 seconds without the Oracle Database In-Memory option.

     

    Next, the Oracle Database In-Memory option was enabled and specific tables from the Sales database were loaded into the IM column store at database startup. (This allows queries to begin executing as soon as the database is available.) Figure 7 shows the monitored SQL execution for the sample query with the Oracle Database In-Memory option. For this particular query, execution time dropped from 109 seconds to 8 seconds—more than a ten-times improvement!

     

    f7.png

    Figure 7: Running the IM query with DAX coprocessors in the SPARC M7 processor reduced execution time to eight seconds.

     

    By default, Oracle Database 12c disables the automatic degree of parallelism (DOP) for statement queuing and in-memory parallel execution. Explicitly altering the initialization parameter PARALLEL_DEGREE_POLICY and setting it to AUTO (instead of MANUAL, the default) enables automatic DOP for the database session:

     

    SQL> alter session set PARALLEL_DEGREE_POLICY=AUTO;

     

    Enabling automatic DOP allows the database to break a statement into activities that can run in parallel. The IM query then takes advantage of any available cores allocated to the logical domain (LDom), along with any available DAX coprocessors, to process data in parallel. Figure 8 shows the execution plan for the parallelized IM query session.

     

    f8.png

    Figure 8: SQL plan for sample analytical query when the DOP policy for the session is set to AUTO.

     

    The IM query (which previously took eight seconds) now takes only two seconds when parallelized (Figure 9).

     

    f9.png

    Figure 9: Setting automatic DOP for the database session reduces IM query duration to two seconds.

     

    Figure 9 lists several test runs for the sample IM query. For the two runs with PARALLEL_DEGREE_POLICY set to MANUAL, the query duration values were 9.0 and 8.0 seconds; database time values (on the right side of Figure 9) were 8.1 and 8.0 seconds, respectively. When the session was initialized for automatic DOP and the query was parallelized across multiple cores and DAX coprocessors, query duration was consistently two seconds and database time was approximately 30 seconds in each test case. A database time of 30 seconds represents the composite CPU time, in other words, the combined execution time across all cores that worked on the query. Because query duration was only two seconds, the 30-second database time clearly shows that multiple cores and DAX coprocessors were available and participated in query execution.
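    The relationship between the two metrics is simple arithmetic: database time divided by wall-clock duration approximates the average number of parallel execution streams. A quick check with the figures above:

```python
# Figures reported above (seconds).
db_time  = 30.0   # combined CPU time across all cores
duration = 2.0    # wall-clock query duration

# Average concurrency implied by the two measurements.
avg_parallelism = db_time / duration
print(avg_parallelism)   # 15.0 -> roughly 15 execution streams on average
```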

     

    Engineers validated in two ways that DAX processing was indeed contributing to query acceleration. First, the following DTrace one-liner uses the fbt provider to count entries into functions of the dax kernel module, reporting DAX activity. Running it while the IM query executed without parallelization (PARALLEL_DEGREE_POLICY=MANUAL) gave these results:

     

    # dtrace -n 'fbt:dax::entry { @num[probefunc] = count(); }'

     

    dtrace: description 'fbt:dax::entry ' matched 69 probes

    ^C

      dax_ctx_alloc                                                     1

      dax_devmap                                                        1

      dax_devmap_access                                                 1

      dax_devmap_map                                                    1

      dax_hma_enable                                                    1

      dax_ioctl                                                         1

      dax_ioctl_ccb_thr_init                                            1

      dax_sfmmu_hash_add_state                                          1

      dax_sfmmu_hash_ent_add_state                                      1

      dax_sfmmu_hash_ent_create                                         1

      dax_state_add_sfmmu                                               1

      dax_state_add_thread                                              1

      dax_getinfo                                                       2

      dax_close                                                         3

      dax_minor_rele                                                    3

      dax_sfmmu_hash_remove_state                                       3

      dax_state_destroy                                                 3

      dax_minor_get                                                     4

      dax_open                                                          4

      dax_state_create                                                  4

      dax_ccb_search                                                  481

      dax_ccb_search_contig                                           481

      dax_hma_unload                                                  481

      dax_sfmmu_wait_matching_ccbs                                    481

      dax_thr_search_ccbs                                             481

      dax_ccb_contains_va                                             719

     

    After setting PARALLEL_DEGREE_POLICY=AUTO for the session, the number of DAX operations increased:

     

    # dtrace -n 'fbt:dax::entry { @num[probefunc] = count(); }'

     

    dtrace: description 'fbt:dax::entry ' matched 69 probes

    ^C

      dax_close                                                        27

      dax_minor_get                                                    27

      dax_minor_rele                                                   27

      dax_open                                                         27

      dax_sfmmu_hash_remove_state                                      27

      dax_state_create                                                 27

      dax_state_destroy                                                27

      dax_ccb_search                                                   28

      dax_ccb_search_contig                                            28

      dax_hma_unload                                                   28

      dax_sfmmu_wait_matching_ccbs                                     28

      dax_thr_search_ccbs                                              28

      dax_ccb_contains_va                                             551

      dax_ccb_buffer_decommit                                        1110

      dax_ccb_buffer_get_contig_block                                1110

      dax_ioctl                                                      1110

      dax_ioctl_ca_dequeue                                           1110

      dax_validate_ca_dequeue_args                                   1110

     

    In addition to DTrace, another way to observe DAX activity is with the busstat(1M) command provided in Oracle Solaris, which accesses bus-related performance counters in the system. The following excerpt of busstat output reports the number of DAX cache coherency buffer fetches with PARALLEL_DEGREE_POLICY=MANUAL:

     

    # busstat -a -w dax,pic0=DAX_SCH_ccb_fetch 5

     

    .

    .

    .

    25  dax114 DAX_SCH_ccb_fetch  0  DAX_QRY0_output_valid 0    DAX_QRY0_output_valid 0

    25  dax115 DAX_SCH_ccb_fetch  1  DAX_QRY0_output_valid 32   DAX_QRY0_output_valid 32

    25  dax116 DAX_SCH_ccb_fetch  0  DAX_QRY0_output_valid 0    DAX_QRY0_output_valid 0

    25  dax117 DAX_SCH_ccb_fetch  0  DAX_QRY0_output_valid 0    DAX_QRY0_output_valid 0

    .

    .

    .

     

    Running the same command with PARALLEL_DEGREE_POLICY=AUTO produces output that shows how all eight DAX coprocessors were engaged in the execution of the parallelized IM query:

     

    # busstat -a -w dax,pic0=DAX_SCH_ccb_fetch 5

     

    .

    .

    .

    30 dax38 DAX_SCH_ccb_fetch  0 DAX_QRY0_output_valid    0      DAX_QRY0_output_valid 0

    30 dax39 DAX_SCH_ccb_fetch  0 DAX_QRY0_output_valid    0      DAX_QRY0_output_valid 0

    30 dax40 DAX_SCH_ccb_fetch  2286 DAX_QRY0_output_valid 621776 DAX_QRY0_output_valid 621776

    30 dax41 DAX_SCH_ccb_fetch  2287 DAX_QRY0_output_valid 622800 DAX_QRY0_output_valid 622800

    30 dax42 DAX_SCH_ccb_fetch  2287 DAX_QRY0_output_valid 621360 DAX_QRY0_output_valid 621360

    30 dax43 DAX_SCH_ccb_fetch  2287 DAX_QRY0_output_valid 622640 DAX_QRY0_output_valid 622640

    30 dax44 DAX_SCH_ccb_fetch  2289 DAX_QRY0_output_valid 622448 DAX_QRY0_output_valid 622448

    30 dax45 DAX_SCH_ccb_fetch  2286 DAX_QRY0_output_valid 621664 DAX_QRY0_output_valid 621664

    30 dax46 DAX_SCH_ccb_fetch  2286 DAX_QRY0_output_valid 621712 DAX_QRY0_output_valid 621712

    30 dax47 DAX_SCH_ccb_fetch  2288 DAX_QRY0_output_valid 622176 DAX_QRY0_output_valid 622176

    30 dax48 DAX_SCH_ccb_fetch  0    DAX_QRY0_output_valid 0      DAX_QRY0_output_valid 0

    .

    .

    .

     

    When automatic DOP is set for a database session, available cores and DAX coprocessors are applied to parallelize an IM query. It is recommended that Oracle Database In-Memory workloads be evaluated with automatic parallelization as a part of standard testing protocols to determine the potential for efficiency gains.
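    The busstat output lends itself to simple post-processing to confirm how many DAX units did real work. The following sketch assumes the whitespace-separated column layout shown in the excerpts above:

```python
sample = """\
30 dax38 DAX_SCH_ccb_fetch 0    DAX_QRY0_output_valid 0      DAX_QRY0_output_valid 0
30 dax40 DAX_SCH_ccb_fetch 2286 DAX_QRY0_output_valid 621776 DAX_QRY0_output_valid 621776
30 dax41 DAX_SCH_ccb_fetch 2287 DAX_QRY0_output_valid 622800 DAX_QRY0_output_valid 622800
"""

def active_dax_units(text):
    """Return (active_unit_count, total_ccb_fetches) for busstat lines
    of the assumed form: <time> <unit> DAX_SCH_ccb_fetch <count> ..."""
    active, total = 0, 0
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[2] == "DAX_SCH_ccb_fetch":
            count = int(fields[3])
            total += count
            if count > 0:
                active += 1
    return active, total

print(active_dax_units(sample))   # (2, 4573) for this three-line sample
```

    Applied to the full AUTO-policy output above, a script like this would report eight active DAX units, matching what the raw listing shows.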

     

    Compression Efficiency

     

    The DBMS_COMPRESSION PL/SQL package, available with both Oracle Database 11g Release 2 and Oracle Database 12c, estimates a table's compressibility and reports row-level compression efficiencies for previously compressed tables. To see how the Oracle Database In-Memory option conserves memory, the following SQL statement shows compression statistics for the in-memory segments of a sample database (specifically, the Sales database used in the previous analytical query example):

     

    SQL> SELECT v.owner, v.segment_name, v.bytes orig_size, v.inmemory_size in_mem_size, v.bytes / v.inmemory_size comp_ratio FROM v$im_segments v;

     

    OWNER

    --------------------------------------------------------------------------------

    SEGMENT_NAME

    --------------------------------------------------------------------------------

    ORIG_SIZE IN_MEM_SIZE COMP_RATIO

    ---------- ----------- ----------

    SH

    CUSTOMERS

    311427072   127205376 2.44822257

     

    SH

    TIMES

       1048576     1179648 .888888889

     

    OWNER

    --------------------------------------------------------------------------------

    SEGMENT_NAME

    --------------------------------------------------------------------------------

    ORIG_SIZE IN_MEM_SIZE COMP_RATIO

    ---------- ----------- ----------

    SH

    SALES

    1202716672   488898560 2.46005362

     

    The Oracle Database In-Memory option reduced the 1.2 GB SALES table to 489 MB in memory, almost two and a half times smaller. Compression efficiency will vary, of course, depending on the actual table data.
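    The COMP_RATIO column is just the ratio of the two size columns, so the reported figures can be double-checked directly:

```python
segments = {
    # segment: (bytes on disk, in-memory size in bytes), from v$im_segments above
    "CUSTOMERS": (311_427_072,   127_205_376),
    "TIMES":     (1_048_576,     1_179_648),
    "SALES":     (1_202_716_672, 488_898_560),
}

for name, (orig, in_mem) in segments.items():
    ratio = orig / in_mem
    print(f"{name:9s} {ratio:.8f}")
# SALES works out to ~2.46, matching the reported COMP_RATIO; note that the
# small TIMES table actually grew (ratio < 1), so compression is not guaranteed.
```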

     

    The In-Memory option is supported with Oracle Database 12c Release 1 (12.1.0.2) and requires Oracle Solaris 11.3 or later on SPARC M7 processor–based platforms. For more information about this option and its value for mixed workloads, see the white papers "Oracle Database In-Memory" and "When to Use Oracle Database In-Memory."

     

      

    Optimizing for Security

     

    Today's industry standards and regulations stipulate rigid requirements for business processes, practices, and IT system configurations, because so much is at risk from fraud, cyber break-ins, and malware. Oracle's integrated hardware and software countermeasures are engineered into the infrastructure stack to safeguard data, whether it is in motion, in process, or at rest. Here's how:

     

    • Silicon Secured Memory, implemented directly on the SPARC M7 processor chip, performs dynamic pointer checking to detect memory referencing errors, including bad pointers, invalid or stale references, and buffer overruns. This technology prevents silent data corruption and application failures that typically take a significant amount of time to find and correct (Figure 10). Silicon Secured Memory is utilized in application-specific memory allocators, such as in SGA memory allocation for Oracle Database 12c (12.1.0.2) client applications and in general-purpose memory allocators (such as malloc) in Oracle Solaris.

      f10.png

      Figure 10: SPARC M7 processor's Silicon Secured Memory technology.

       

    • Oracle Database 12c introduces advanced built-in security capabilities, such as conditional auditing, data redaction, Real Application Security, privilege analysis, stronger application bypass controls, and new administrative roles.
    • The Oracle Solaris Cryptographic Framework provides a common set of algorithms and libraries that allow applications to access hardware-accelerated on-core cryptographic functions directly in the SPARC M7 processor.  These features are utilized to rapidly encrypt data at rest and in motion, and they are implemented in many Oracle products and middleware layers. On-chip encryption runs at high speeds, providing an efficient and effective way to protect sensitive data and reduce risk. The higher memory bandwidths of the SPARC M7 processor–based servers push these already fast encryption and decryption rates even higher.
    • Operating system and server isolation with Oracle Solaris Zones, Oracle VM Server for SPARC, and physical domains ensures that if a virtual server is compromised, system-level isolation prevents impacts to other servers and application processes.
    • The security compliance framework in Oracle Solaris 11 is based on the NIST standard Security Content Automation Protocol (SCAP), and it includes the compliance utility, which is an implementation of the OpenSCAP toolkit. The compliance utility automates security assessments and checks Oracle Solaris configurations against a defined security profile. Oracle Solaris provides several predefined compliance profiles that can also be customized to match site and security requirements. The compliance utility generates reports that detect noncompliance issues and suggests steps to remediate them. System administrators can easily define a service that runs compliance checks against a specific profile on a regular schedule. By capturing a proven compliant system image in an archive format for cloning (either a complete system or a zone), administrators can easily propagate secure and compliant server configurations.

     

    Oracle engineers have validated security-hardened Oracle Optimized Solution for Secure Oracle Database configurations by following best practices documented in Oracle Solaris STIGs for SPARC platforms and by using the Oracle Solaris compliance command to conduct security compliance checks. Innovations in the SPARC M7 processor working together with robust security features in software yield the most advanced security platform.

     

    Oracle provides a complete portfolio of security solutions to ensure data privacy, protect against insider threats, and enable regulatory compliance. The paper "Oracle Database 12c Security and Compliance" describes how Oracle technologies are engineered together to deliver a comprehensive and multilayered security model that covers a broad range of preventive, detective, and administrative features.

     

    Oracle also documents and publishes a number of recommended best practices for maintaining secure and compliant systems, database, and application configurations, and they are listed at the end of this article.

     

    Optimizing for High Availability

     

    Choosing the right hardware and software architecture also helps avoid the kinds of mistakes that turn system failures into catastrophic and expensive outages.

     

    But choosing and implementing the right architecture that best fits an organization's high availability (HA) requirements can be intimidating. Oracle Optimized Solution for Secure Oracle Database defines a solution that combines clustered Oracle systems and storage with a highly reliable and secure operating system, server virtualization, clustering software, and database technologies.

     

    Support for mission-critical Oracle Database applications typically leverages failover redundancy in servers, hardware components, and networks, along with network and storage multipathing and RAID storage configurations.

     

    The advanced high-availability features in Oracle's SPARC M7 processor–based servers provide hardware support that complements the HA and virtualization features built into the Oracle Solaris operating system and Oracle Database. Each layered combination of hardware and software enhances the overall reliability, availability, and serviceability (RAS) of a complex business-critical system. Virtualized server isolation, server cloning, and software clustering technologies add to the reliability and redundancy of the underlying hardware to ensure application availability in an optimally secure environment. The next sections explain how.

     

    HA Features in SPARC M7 Processor–Based Servers

     

    SPARC M7 processor–based servers have built-in hardware features for high availability, including the following:

     

    • End-to-end data protection that detects and corrects errors throughout the system to ensure complete data integrity
    • Highly available networking and storage connectivity through redundant, hot-swappable power supply and fan units, and multiple I/O controller ASICs
    • Memory sparing in the SPARC M7 processor, which automatically deconfigures a DIMM determined to be faulty, with no interruption of service
    • System telemetry and health diagnostics recorded by redundant, hot-swappable service processors in the SPARC M7 processor–based servers and forwarded to Oracle Enterprise Manager Ops Center for analysis and action

     

    HA Features in Oracle Solaris

     

    Reliability and availability features, including the following, have long been part of Oracle Solaris:

     

    • Fault Management Architecture (FMA) capabilities to automatically diagnose faults in the system and initiate self-healing actions to prevent service interruptions
    • Service Management Facility (SMF) to check that all dependencies have been met before restarting failed services, preventing restarts from failing
    • Oracle Solaris ZFS for fast cloning to support the resiliency of database environments as well as data integrity, capacity, performance, and manageability of storage
    • Oracle Solaris multipathing software options that provide redundant physical paths to I/O devices such as network interfaces and storage devices

     

    HA Clustering

     

    Clustering physical servers to eliminate single points of failure allows application and web services to be migrated quickly away from failing nodes with minimal interruption. Oracle Solaris Cluster detects node failures and provides fast application failover and reconfiguration.

     

    At the database layer, Oracle Real Application Clusters (Oracle RAC) utilizes a shared-cache clustered database architecture that overcomes the limitations of traditional architectures. Oracle RAC provides HA, scalability, and reliability and can be used for OLTP, data warehousing, and mixed workloads.

     

    Best Practices for Ensuring HA

     

    Oracle Optimized Solution for Secure Oracle Database embodies many best practices proven to increase database application availability. Oracle documents these best practices for deploying a highly available database environment in the Database High Availability Overview. Because availability requirements can vary greatly between different customers and applications, this document defines tiers of objectives that are commonly set in service-level agreements (SLAs), such as recovery time objectives (RTOs) and recovery point objectives (RPOs).

     

    Conclusion

     

    Choosing the right hardware and software when considering a migration of legacy business systems to Oracle Solaris can bring the significant benefits of higher performance, reduced cost and risk, maximum security and availability, and greater productivity. These are the advantages of running Oracle Applications and Oracle Database on the latest Oracle technologies.

     

    In the first article in this series, we saw that with the proper planning and assessment, customers can migrate with ease and confidence. And by using training resources available from Oracle University and by applying Oracle Optimized Solutions as an end-state architecture, customers have all the tools necessary to migrate and also upgrade their databases. Moreover, customers who elect not to perform a migration themselves have access to Oracle Migration Factory experts who can assist with their deep experience, tools, and methods.

     

    In the second article, we explored a real-world example of migrating an Oracle Database instance from IBM AIX to Oracle Solaris, highlighting the steps, effort, and benefits. That article also demonstrated the simplicity and ease of operation built into Oracle Database.

     

    Finally, by making use of end-to-end Oracle Optimized Solutions, customers can take advantage of the unique hardware and software features in a converged infrastructure that combines Oracle's SPARC M7 processor–based servers, Oracle Solaris, and Oracle Database software to achieve lower costs of ownership, reduced integration risk, better productivity, and shortened time to deployment. Oracle-only optimizations, such as the SPARC M7 processor's Software in Silicon technology and the Oracle Database In-Memory option, offer extreme performance for business analytics over mixed workloads, demonstrating that architecture does matter when it comes to complex business-critical processes.

     

    The migration to Oracle-on-Oracle technology is now easier than ever. Organizations can do it themselves, or they can get help from Oracle expert services through the Oracle Migration Factory to reduce risk and streamline the migration effort. Using proven Oracle Optimized Solutions practices and tools, databases can be migrated and upgraded quickly to immediately take advantage of new features in Oracle Database, Oracle Solaris, and the underlying hardware, all working together.

     

    To find out more about how to begin migration and receive the benefits of Oracle hardware and software engineered to work together, see oracle.com/aixtosolaris.

     

      

    For More Information

     

     

      

    About the Authors

     

    Randal Sagrillo is a solutions architect for Oracle. He has over 35 years of IT experience and is an expert in storage and systems performance, most recently applying this expertise to next-generation data center architectures, integrated systems, and Oracle engineered systems. Randal is also a frequent and well-attended speaker on database platform performance analysis and tuning and has spoken at several industry conferences including Oracle OpenWorld, Collaborate, Computer Measurements Group, and Storage Networking World. In his current role, he is responsible for solution architecture and development around Oracle Optimized Solutions. Before joining Oracle, Randal held a variety of leadership roles in product management, program management, and hardware/software product development engineering.

     

    Lawrence McIntosh is the chief architect within the Oracle Optimized Solutions team. He has designed and implemented highly optimized computing, networking, and storage technologies for both Sun Microsystems and Oracle. Lawrence has over 40 years of experience in the computer, network, and storage industries and has been a software developer and consultant in the commercial, government, education, and research sectors and an information systems professor. He has directly contributed to the product development phases of Oracle Exadata Database Machine and various Oracle Optimized Solution architectures. His most recent contribution has been in the design, development, testing, and deployment of Oracle Optimized Solution for Secure Oracle Database.

     

    Richard Friedman is a freelance writer developing technical collateral and documentation for software companies. Previously, Richard worked for many years at Sun Microsystems and Oracle as a senior technical writer specializing in documenting high-performance computing, compilers, and developer tools.

     

     

    Revision 1.0, 11/30/2015

     
