Configuring Storage for SAP Systems with Oracle Database on Oracle ZFS Storage Appliance

Version 2

    by Jan Brosowski, Ulrich Conrad, Hans-Juergen Denecke, Victor Galis, Gia-Khanh Nguyen, Pierre Reynes, and Xirui Yang

     

    If you are deploying SAP systems on Oracle Solaris, this article provides storage configuration and optimization recommendations for Oracle Database and SAP file systems that are NFS-mounted from an Oracle ZFS Storage Appliance system.

     

    Introduction

    This article explains optimizations, recommendations, and best practices gathered from testing done in engineering environments and customer implementations, and from related installation and configuration documentation. Sources include engineering deployments of Oracle Optimized Solution for SAP, Oracle Solution Center customer projects, the SAP product installation guide, and the SAP Community Network document "Recommended mount options for read-write directories," which is also available as My Oracle Support document 1567137.1. (Access to My Oracle Support documents requires logon and authentication. Please see the references at the end of this article for links to these documents, other sources, and useful reading.)

     

    To develop this article, Oracle engineers implemented an example Oracle Optimized Solution for SAP configuration to support typical enterprise resource planning (ERP) and SAP Enterprise Portal workloads, applying identified best practices and recommendations. This testing helped engineers validate the solution architecture and optimize the storage configuration.

    Oracle Optimized Solutions provide tested and proven best practices for how to run software products on Oracle systems.

     


    About Oracle ZFS Storage Appliance

     

    The Oracle ZFS Storage Appliance family of products comprises hybrid storage systems powered by a multithreaded symmetric multiprocessing (SMP) operating system. These systems are based on an innovative cache-centric architecture that features massive dynamic random access memory (DRAM) and flash-based caches. This design supplies unprecedented caching performance for both reads and writes, eliminating the need for large numbers of hard disk drive spindles that are otherwise necessary for fast I/O operations in traditional network-attached storage (NAS) solutions. In addition, the appliances' design is flexible, scaling throughput, processor performance, and storage capacity easily as storage needs change.

     

    Oracle ZFS Storage Appliance systems offer several physical connectivity options and support multiple storage protocols. They can be connected to heterogeneous physical networks such as Ethernet, InfiniBand, and Fibre Channel. Supported storage protocols include NFS (Network File System), SMB (Server Message Block), HTTP (Hypertext Transfer Protocol), FTP, SFTP, and TFTP (File Transfer Protocol, SSH File Transfer Protocol, and Trivial File Transfer Protocol), iSCSI LUNs (Internet Small Computer System Interface LUNs), NDMP (Network Data Management Protocol), SRP (SCSI RDMA Protocol), and native Fibre Channel LUNs.

     

    Oracle Optimized Solution for SAP incorporates an Oracle ZFS Storage Appliance system because it offers flexible, easy-to-manage, and highly available shared storage for both SAP and Oracle Database file systems, while at the same time delivering high performance for many Oracle Database and SAP instances.

     

    Oracle ZFS Storage Appliance systems offer cloud-converged storage, optimized for Oracle software. These systems can act as a native cloud gateway, seamlessly moving data from on-premises environments to Oracle Public Cloud. During testing for this Oracle Optimized Solution, an Oracle ZFS Storage ZS3-4 appliance was used to provide NFS shares for data access and iSCSI LUNs for Oracle Solaris Cluster quorum devices. The storage appliance was configured for on-premises storage only.

     

    For more information about Oracle ZFS Storage Appliance features and architecture, see the white paper "Architectural Overview of the Oracle ZFS Storage Appliance."

     

    File System Location and Sharing

     

    In an SAP deployment, not all files can or should be stored on a shared file system, and when they are, not all files should be stored in the same way. This section describes directories that must be on a local file system, those that can be stored on shared file systems, and those that must be stored on shared file systems. For file systems that are shared, there are different ways in which they can be shared: across multiple SAP systems, dedicated within an SAP system ID (SID), or dedicated to a specific Oracle Solaris Zone or zone cluster. Tables 1, 2, and 3 list typical file systems in an SAP and Oracle Database deployment on an Oracle ZFS Storage Appliance system, along with file sharing requirements and recommendations.

     

    Note: According to SAP Note 527843 for Oracle Real Application Clusters (Oracle RAC), the Oracle Grid Infrastructure feature of Oracle Database must be installed locally on each cluster node. (Access to SAP Notes requires logon and authentication to the SAP Marketplace.) The default local folder name for Oracle Grid Infrastructure in an SAP environment is /oracle/GRID. It is also recommended to put /oracle/oraInventory on the local file system. In the example test deployment of Oracle Optimized Solution for SAP, we put the directory /oracle on a local file system while creating separate shared file systems on an Oracle ZFS Storage Appliance system for each subfolder of /oracle/<SID>/*, along with a shared file system for /oracle/client.

     

    Table Legend
    √ Required/Recommended
    ο Possible

    Table 1. Sharing Requirements for SAP File Systems

     

    File System | Local | Common Share Across Multiple SAP Systems | Dedicated Shared Within Each SAP System | Dedicated Shared Within an Oracle Solaris Zone Cluster (high availability [HA]) or Oracle Solaris Zone (non-HA)
    /usr/sap/hostctrl
    /sapmnt/<SID>
    /usr/sap  ο  ο
    /usr/sap/trans

     

    Table 2. Sharing Requirements for Oracle RAC File Systems

                              

    File System | Local | Dedicated Shared Within Each SAP System | Dedicated Shared Within an Oracle Solaris Zone Cluster
    /var/opt/oracle
    /usr/local/bin
    /oracle  ο
    /oracle/GRID
    /oracle/oraInventory  ο
    /oracle/BASE  ο
    /oracle/stage  ο
    /oracle/client
    /oracle/<SID>  ο
    /oracle/<SID>/<release>
    Voting Disks
    Oracle Cluster Registry
    /oracle/<SID>/* (except folders listed above)

     

    Table 3. Sharing Requirements for Failover of Single-Instance Oracle Database File Systems

                      

    File System | Local | Dedicated Shared Within Each SAP System | Dedicated Shared Within an Oracle Solaris Zone Cluster
    /var/opt/oracle
    /usr/local/bin
    /oracle  ο
    /oracle/oraInventory  ο
    /oracle/stage  ο
    /oracle/client
    /oracle/<SID>  ο
    /oracle/<SID>/<release>
    /oracle/<SID>/* (except folders listed above)

     

    Projects and File System Layout on Oracle ZFS Storage Appliance

     

    This section describes appliance projects and file systems, along with their respective configuration options, indicating how they were created in the example deployment of Oracle Optimized Solution for SAP. Several factors—including workload, data consistency, security, and storage capacity requirements—determine the layout of projects and file systems on the appliance.

     

    Workload Impact on Layout of Storage Pools and Projects

     

    In the example deployment, the Oracle ZFS Storage Appliance system was configured with two controller heads in active-active mode to support two SAP systems (PR1 and PP1). One storage pool was created per appliance head (pool1 on head 1 and pool2 on head 2), each using half the disk drives available (see the technical white paper "Secure Consolidation of SAP Systems on Highly Available Oracle Infrastructure" for more details).

     

    The layout was designed across the two clustered controller heads to accommodate the specific I/O requirements for the PR1 and PP1 SAP system workloads. In the example solution, the PR1 system is an ERP system with heavy database workloads using an Oracle RAC database, while the PP1 system is an SAP Enterprise Portal system using a single-instance Oracle Database configuration with failover (Figure 1).

     

    Note: For more information on the benefits of clustering Oracle ZFS Storage Appliance controller heads, see the technical white paper "Best Practices for Oracle ZFS Storage Appliance Clustered Controllers".

     


    Figure 1: SAP system configuration and storage pools in the example deployment.

     

    Based on workload requirements, it was decided to dedicate the first controller head exclusively to Oracle Database storage needs for SAP system PR1. The second controller head serves the rest of the storage needs for the infrastructure, namely Oracle Database for SAP system PP1, the binaries and SAP file systems for both SAP systems, iSCSI LUNs for clustering requirements, and boot and swap devices for the virtualized environments.

     

    After the storage pools are created, projects must be created on the appliance to define groups of shares with common administrative and storage settings. In addition to simplifying storage administration, projects are created in line with other important criteria, such as data consistency and security requirements.

     

    Project Layout Impact on Data Consistency

     

    Each project acts as a data consistency group for all the shares it contains. Therefore, we recommend creating a project for each individual database and its application; this provides a level of atomic consistency when snapshots or replication are performed. When transactions span multiple shares, integrity across those shares must be guaranteed. When a project is defined for a database and its associated SAP application, each share within the project is replicated to the same target, on the same schedule, and with the same options as the parent project, and all shares are replicated in the same stream using the same project-level snapshots. This capability is important for applications that require data consistency among multiple shares, and it influenced project and share placement in the example deployment.

     

    Project Layout Impact on Security

     

    Projects also define a common administrative control point for managing shares and enable authorized administrative access using a role-based access control (RBAC) security model. For more information on the appliance's administrative model, see the Oracle ZFS Storage Appliance Security Guide.

     

    SAP Storage Space Requirements

     

    Table 4 summarizes capacity recommendations for SAP storage on a per-instance basis. To support multiple instances, the recommended space requirements must be multiplied accordingly.

     

    Table 4. SAP Storage Space Requirements

     

    Application Instance | Minimum Storage Space Requirement (per Instance)
    SAP Central Services (SCS) instance | 2 GB
    Enqueue Replication Server (ERS) instance for the SCS, if required | 2 GB
    ABAP SAP Central Services (ASCS) instance | 2 GB
    Enqueue Replication Server (ERS) instance for the ASCS, if required | 2 GB
    Primary Application Server (PAS) instance | 5 GB
    Additional Application Server (AAS) instance | 5 GB

     

    Storage Layout

     

    Given the workload, data consistency, security, and storage capacity requirements discussed above, we created the storage pools, projects, and shares on the Oracle ZFS Storage Appliance system using the structure and options shown in Table 5. The storage layout outlined here should be modified according to your own implementation-specific workload and application requirements.

     

    Projects and File Systems for Pool1 on Head 1

     

    As mentioned previously, because of the database-intensive nature of the ERP workload, we decided to dedicate the first controller head of the Oracle ZFS Storage Appliance system exclusively to Oracle Database file systems for SAP system PR1. Table 5 lists the file systems created in Pool1 for SAP system PR1 and the respective file system properties (record size, logbias value, primarycache value, and compression). (For descriptions of these options, see "Project Properties" in the Oracle ZFS Storage Appliance Administration Guide.)

     

    Note: Depending on the workload profile, the record size of the sapdata* file systems may need to be adjusted, increasing the size if there are predominantly sequential read/write operations, or decreasing it if there are mostly random I/O operations.
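
    For example, a record size change for a single share could be made through the appliance CLI roughly as follows. This is a sketch only: the property name recordsize and the share path shown (project PR1, share sapdata1) match the layout used in this article, but the exact prompts and property names may differ between appliance software releases.

    izfssa-01:> shares
    izfssa-01:shares> select PR1
    izfssa-01:shares PR1> select sapdata1
    izfssa-01:shares PR1/sapdata1> set recordsize=128K
    izfssa-01:shares PR1/sapdata1> commit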

     

    Table 5. Properties of Projects and File Systems Created in Pool1

                                                                                                

    Head | Project | File System | Record Size | logbias Value | primarycache Value | Compression
    1 | PR1 | PR1 project (settings inherited as indicated) | 8 KB | Latency | All data and metadata | Off
    1 | PR1 | 12102 | 128 KB | Latency | All data and metadata | Off
    1 | PR1 | backup | 128 KB | Throughput | No cache | LZJB
    1 | PR1 | cfgtoollogs | Inherited from project
    1 | PR1 | crs1 | Inherited from project
    1 | PR1 | crs2 | Inherited from project
    1 | PR1 | crs3 | Inherited from project
    1 | PR1 | mirrlogA | 128 KB | Latency | No cache | Off
    1 | PR1 | mirrlogB | 128 KB | Latency | No cache | Off
    1 | PR1 | oraarch | 128 KB | Throughput | No cache | Off
    1 | PR1 | oracle-client | 128 KB | Latency | All data and metadata | Off
    1 | PR1 | oraflash | 64 KB | Throughput | No cache | Off
    1 | PR1 | origlogA | 128 KB | Latency | No cache | Off
    1 | PR1 | origlogB | 128 KB | Latency | No cache | Off
    1 | PR1 | saparch | 128 KB | Throughput | No cache | LZJB
    1 | PR1 | sapbackup | 64 KB | Throughput | No cache | Off
    1 | PR1 | sapcheck | 64 KB | Throughput | No cache | Off
    1 | PR1 | sapdata1 | 8 KB | Throughput | All data and metadata | Off
    1 | PR1 | sapdata2 | 128 KB | Throughput | No cache | Off
    1 | PR1 | sapdata3 | Inherited from project
    1 | PR1 | sapdata4 | 8 KB | Throughput | All data and metadata | Off
    1 | PR1 | sapprof | 128 KB | Throughput | No cache | Off
    1 | PR1 | sapreorg | 64 KB | Latency | All data and metadata | Off
    1 | PR1 | saptrace | Inherited from project

     

    Table 6 lists the mount points for the file systems created in Pool1.

     

    Table 6. File Systems and Mount Points for Oracle RAC Files

                                                                        

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    1 | PR1 | 12102 | /export/sapt58/PR1/12102 | /oracle/PR1/12102 (Mounted in any zone where PR1 database will run)
    1 | PR1 | backup | /export/sapt58/PR1/backup | Backup folder for PR1 database
    1 | PR1 | cfgtoollogs | /export/sapt58/PR1/cfgtoollogs | /oracle/PR1/cfgtoollogs
    1 | PR1 | crs1 | /export/sapt58/PR1/crs1 | /oracle/PR1/crs1
    1 | PR1 | crs2 | /export/sapt58/PR1/crs2 | /oracle/PR1/crs2
    1 | PR1 | crs3 | /export/sapt58/PR1/crs3 | /oracle/PR1/crs3
    1 | PR1 | mirrlogA | /export/sapt58/PR1/mirrlogA | /oracle/PR1/mirrlogA
    1 | PR1 | mirrlogB | /export/sapt58/PR1/mirrlogB | /oracle/PR1/mirrlogB
    1 | PR1 | oraarch | /export/sapt58/PR1/oraarch | /oracle/PR1/oraarch
    1 | PR1 | oracle-client | /export/sapt58/PR1/oracle-client | /oracle/client (Mounted in any zone where PR1 database and SAP system PR1 application servers will run)
    1 | PR1 | oraflash | /export/sapt58/PR1/oraflash | /oracle/PR1/oraflash
    1 | PR1 | origlogA | /export/sapt58/PR1/origlogA | /oracle/PR1/origlogA
    1 | PR1 | origlogB | /export/sapt58/PR1/origlogB | /oracle/PR1/origlogB
    1 | PR1 | saparch | /export/sapt58/PR1/saparch | /oracle/PR1/saparch
    1 | PR1 | sapbackup | /export/sapt58/PR1/sapbackup | /oracle/PR1/sapbackup
    1 | PR1 | sapcheck | /export/sapt58/PR1/sapcheck | /oracle/PR1/sapcheck
    1 | PR1 | sapdata1 | /export/sapt58/PR1/sapdata1 | /oracle/PR1/sapdata1
    1 | PR1 | sapdata2 | /export/sapt58/PR1/sapdata2 | /oracle/PR1/sapdata2
    1 | PR1 | sapdata3 | /export/sapt58/PR1/sapdata3 | /oracle/PR1/sapdata3
    1 | PR1 | sapdata4 | /export/sapt58/PR1/sapdata4 | /oracle/PR1/sapdata4
    1 | PR1 | sapprof | /export/sapt58/PR1/sapprof | /oracle/PR1/sapprof
    1 | PR1 | sapreorg | /export/sapt58/PR1/sapreorg | /oracle/PR1/sapreorg
    1 | PR1 | saptrace | /export/sapt58/PR1/saptrace | /oracle/PR1/saptrace

     

    Projects and File Systems for Pool2 on Head 2

     

    Pool2 on the second controller head serves the rest of the storage needs for the infrastructure: the file systems for the PP1 database, the binaries and SAP file systems for both SAP systems, iSCSI LUNs for clustering, and boot and swap devices for all virtual servers. Table 7 lists the file systems created in Pool2 and their properties (record size, logbias value, primarycache value, and compression value).

     

    Note: Depending on the workload profile, the record size of the sapdata* file systems may have to be adjusted, increasing the size if there are predominantly sequential read/write operations, or decreasing it if there are mostly random I/O operations.

     

    Table 7. Properties of Projects and File Systems Created in Pool2

                                                                                                                                                      

    Head | Project | File System | Record Size | logbias Value | primarycache Value | Compression
    2 | TRANS | TRANS project (settings inherited as indicated) | 128 KB | Throughput | No cache | LZJB
    2 | TRANS | usr-sap-trans | Inherited from project
    2 | SOFTWARE | SOFTWARE project (settings inherited as indicated) | 128 KB | Throughput | No cache | LZJB
    2 | SOFTWARE | software | Inherited from project
    2 | PR1APP | PR1APP project (settings inherited as indicated) | 128 KB | Throughput | No cache | Off
    2 | PR1APP | sapmnt | Inherited from project
    2 | PR1APP | usr-sap-aas-01 | Inherited from project
    2 | PR1APP | usr-sap-aas-02 | Inherited from project
    2 | PR1APP | usr-sap-ascs | Inherited from project
    2 | PR1APP | usr-sap-haapps | Inherited from project
    2 | PP1 | PP1 project (settings inherited as indicated) | 128 KB | Throughput | All data and metadata | Off
    2 | PP1 | 11204 | 128 KB | Latency | All data and metadata | Off
    2 | PP1 | backup | 128 KB | Throughput | No cache | LZJB
    2 | PP1 | cfgtoollogs | Inherited from project
    2 | PP1 | mirrlogA | 128 KB | Latency | No cache | Off
    2 | PP1 | mirrlogB | 128 KB | Latency | No cache | Off
    2 | PP1 | oraarch | Inherited from project
    2 | PP1 | oracle | 128 KB | Latency | All data and metadata | Off
    2 | PP1 | oracle-client | 128 KB | Latency | All data and metadata | Off
    2 | PP1 | oraflash | 64 KB | Throughput | No cache | Off
    2 | PP1 | origlogA | 128 KB | Latency | No cache | Off
    2 | PP1 | origlogB | 128 KB | Latency | No cache | Off
    2 | PP1 | saparch | 128 KB | Throughput | No cache | LZJB
    2 | PP1 | sapbackup | 64 KB | Throughput | No cache | Off
    2 | PP1 | sapcheck | 64 KB | Throughput | No cache | Off
    2 | PP1 | sapdata1 | 8 KB | Throughput | All data and metadata | Off
    2 | PP1 | sapdata2 | Inherited from project
    2 | PP1 | sapdata3 | 8 KB | Throughput | No cache | Off
    2 | PP1 | sapdata4 | 8 KB | Throughput | All data and metadata | Off
    2 | PP1 | sapmnt | Inherited from project
    2 | PP1 | sapprof | 128 KB | Throughput | No cache | Off
    2 | PP1 | sapreorg | 64 KB | Latency | All data and metadata | Off
    2 | PP1 | saptrace | Inherited from project
    2 | PP1 | stage | 64 KB | Throughput | No cache | Off
    2 | PP1 | usr-sap-aas-01 | Inherited from project
    2 | PP1 | usr-sap-aas-02 | Inherited from project
    2 | PP1 | usr-sap-scs | Inherited from project
    2 | PP1 | usr-sap-haapps | Inherited from project

     

    Table 8 through Table 12 list the mount points for each file system created in Pool2. Because the Oracle Database instances and the SAP system PR1 and PP1 instances run in different zone clusters, several NFS mount points must be defined on the corresponding servers (that is, in specific zones and zone clusters).

     

    Table 8. File Systems and Mount Points for Binaries for Single-Instance Oracle Database Configuration

     

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    2 | PP1 | oracle | /export/sapt58/PP1/oracle | /oracle (Mounted in any zone where PP1 database will run)
    2 | PP1 | oracle-client | /export/sapt58/PP1/oracle-client | /oracle/client (Mounted in any zone where PP1 database and SAP system PP1 application servers will run)
    2 | PP1 | 11204 | /export/sapt58/PP1/11204 | /oracle/PP1/11204 (Mounted in any zone where PP1 database will run)
    2 | PP1 | backup | /export/sapt58/PP1/backup | Backup folder for PP1 database

     

    Table 9. File Systems and Mount Points for Files for Single-Instance Oracle Database Configuration

     

                                  

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    2 | PP1 | cfgtoollogs | /export/sapt58/PP1/cfgtoollogs | /oracle/PP1/cfgtoollogs
    2 | PP1 | mirrlogA | /export/sapt58/PP1/mirrlogA | /oracle/PP1/mirrlogA
    2 | PP1 | mirrlogB | /export/sapt58/PP1/mirrlogB | /oracle/PP1/mirrlogB
    2 | PP1 | oraarch | /export/sapt58/PP1/oraarch | /oracle/PP1/oraarch
    2 | PP1 | oraflash | /export/sapt58/PP1/oraflash | /oracle/PP1/oraflash
    2 | PP1 | origlogA | /export/sapt58/PP1/origlogA | /oracle/PP1/origlogA
    2 | PP1 | origlogB | /export/sapt58/PP1/origlogB | /oracle/PP1/origlogB
    2 | PP1 | saparch | /export/sapt58/PP1/saparch | /oracle/PP1/saparch
    2 | PP1 | sapbackup | /export/sapt58/PP1/sapbackup | /oracle/PP1/sapbackup
    2 | PP1 | sapcheck | /export/sapt58/PP1/sapcheck | /oracle/PP1/sapcheck
    2 | PP1 | sapdata1 | /export/sapt58/PP1/sapdata1 | /oracle/PP1/sapdata1
    2 | PP1 | sapdata2 | /export/sapt58/PP1/sapdata2 | /oracle/PP1/sapdata2
    2 | PP1 | sapdata3 | /export/sapt58/PP1/sapdata3 | /oracle/PP1/sapdata3
    2 | PP1 | sapdata4 | /export/sapt58/PP1/sapdata4 | /oracle/PP1/sapdata4
    2 | PP1 | sapprof | /export/sapt58/PP1/sapprof | /oracle/PP1/sapprof
    2 | PP1 | sapreorg | /export/sapt58/PP1/sapreorg | /oracle/PP1/sapreorg
    2 | PP1 | saptrace | /export/sapt58/PP1/saptrace | /oracle/PP1/saptrace

     

    Table 10. File Systems and Mount Points for SAP System PR1

     

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    2 | PR1APP | usr-sap-ascs | /export/sapt58/PR1APP/usr-sap-ascs | /usr/sap (Mounted in zone clusters epr1-ascs-01 and epr1-ascs-02)
    2 | PR1APP | usr-sap-haapps | /export/sapt58/PR1APP/usr-sap-haapps | /usr/sap (Mounted in zone clusters epr1-haapps-01 and epr1-haapps-02)
    2 | PR1APP | usr-sap-aas-01 | /export/sapt58/PR1APP/usr-sap-aas-01 | /usr/sap (Mounted in the zone for SAP additional application servers without HA in Node 1)
    2 | PR1APP | usr-sap-aas-02 | /export/sapt58/PR1APP/usr-sap-aas-02 | /usr/sap (Mounted in the zone for SAP additional application servers without HA in Node 2)
    2 | PR1APP | sapmnt | /export/sapt58/PR1APP/sapmnt | /sapmnt/PR1 (Mounted in any zone where SAP system PR1 and PR1 database will run)

     

    Table 11. File Systems and Mount Points for SAP System PP1

     

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    2 | PP1 | usr-sap-scs | /export/sapt58/PP1/usr-sap-scs | /usr/sap (Mounted in zone clusters epp1-scs-01 and epp1-scs-02)
    2 | PP1 | usr-sap-haapps | /export/sapt58/PP1/usr-sap-haapps | /usr/sap (Mounted in zone clusters epp1-haapps-01 and epp1-haapps-02)
    2 | PP1 | sapmnt | /export/sapt58/PP1/sapmnt | /sapmnt/PP1 (Mounted in any zone where SAP system PP1 and PP1 database will run)

     

    Table 12. File Systems and Mount Points Common to SAP Systems PR1 and PP1

     

    Head | Project | File System | Oracle ZFS Storage Appliance Mount Point | NFS Mount Point
    2 | TRANS | usr-sap-trans | /export/sapt58/TRANS/usr-sap-trans | /usr/sap/trans (Mounted in any zone where SAP systems PR1 and PP1 will run)

     

    Enabling Jumbo Frames

     

    The maximum transmission unit (MTU) size is the largest amount of data that can be carried in a single Ethernet frame; by default, the MTU size is 1,500 bytes. Increasing the MTU size to 9,000 bytes (enabling jumbo frames) lets each frame carry more data with less protocol overhead, resulting in greater network efficiency, lower I/O latency, and higher throughput. When deploying Oracle Optimized Solution for SAP, the use of jumbo frames is recommended for the private network that connects shared storage to the SAP and Oracle Database instances. Note that you must configure jumbo frame support on both the server and storage sides of the network, as well as on any network device in between, physical or virtual.

     

    Enabling Support for Jumbo Frames on the Server

     

    This command configures the MTU size in the primary domain on the server during virtual switch configuration. VLAN 105 is used for the storage network.

     

    root@sapt58-ctrl-01:~# ldm add-vsw net-dev=net5 mtu=9000 vid=102,103,105 pri-priv-vsw primary

    Listing 1: Configuring jumbo frames on the server.
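
    Beyond the virtual switch itself, it can be useful to confirm the MTU that is actually in effect on the storage-facing datalink inside a domain or zone. The link name net1 below is only an example; substitute the datalink used for the storage VLAN in your environment.

    # Check the current MTU of the storage-facing datalink (net1 is an example link name).
    root@sapt58-ctrl-01:~# dladm show-linkprop -p mtu net1

    # On a physical, non-virtualized link, jumbo frames can be enabled directly.
    root@sapt58-ctrl-01:~# dladm set-linkprop -p mtu=9000 net1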

     

    For more information, see "How to Configure Virtual Network and Virtual Switch Devices to Use Jumbo Frames" in the Oracle VM Server for SPARC 3.4 Administration Guide. If the environment is not virtualized using Oracle VM Server for SPARC, see "Enabling Support for Jumbo Frames" in the Oracle Solaris guide, Configuring and Administering Network Components in Oracle Solaris 11.2.

     

    Enabling Support for Jumbo Frames on the Storage

     

    Once the MTU size is set on the server side, it is necessary to configure the network datalinks on the Oracle ZFS Storage Appliance system. To create the datalinks using the browser interface (Figure 2) on the appliance, navigate to Configuration -> Network and click the + icon next to Datalinks. Enter the datalink name as Storage Datalink 1, and set the MTU size to 9,000 bytes to enable jumbo frames. Select the first device, ixgbe1, and click Apply.

     


    Figure 2: Configuring jumbo frames on device ixgbe1.

     

    Repeat the process for device ixgbe3. Click the + icon next to Datalinks, enter the datalink name as Storage Datalink 3, and set the MTU size to 9,000 bytes to enable jumbo frames. Select device ixgbe3 and click Apply.

     


    Figure 3:  Configuring jumbo frames on device ixgbe3.

     

    NFS Mount Options

     

    This article focuses on SAP systems deployed on Oracle Solaris with NFS-mounted file systems on NAS storage, specifically file systems on an Oracle ZFS Storage Appliance system. The NFS mount option recommendations given here are for SAP and Oracle Database file systems that reside on the appliance. As with other best practices and recommendations in this article, the parameters and options listed here are based on internal engineering deployments, customer implementations, and related documentation.

     

    Figure 4 depicts virtual servers in the example deployment that mount SAP and Oracle Database file systems. These virtual servers are constructed using Oracle's built-in virtualization technologies: Oracle VM Server for SPARC logical domains (LDOMs) and Oracle Solaris Zones. To configure some servers for HA, certain zones are clustered using the Oracle Solaris Cluster software. This section lists example /etc/vfstab entries used to mount file systems for the virtual servers. Figure 4 shows the naming conventions for the domains, zones, and zone clusters that comprise the virtual servers.

     


    Figure 4:  Virtualization layout for example deployment of Oracle Optimized Solution for SAP.

     

     

    In a typical SAP environment, SAP application servers access the following shared file systems:

     

    • /sapmnt is the base directory for the SAP system. It contains executable kernel programs, log files, and profiles.
    • /usr/sap contains files for the operation of SAP application instances. It should be mounted in any zone running SAP instances.
    • /usr/sap/trans is the global transport directory for all SAP systems. It contains data and log files for objects transported between SAP systems. This transport directory needs to be accessible by at least one application server instance of each SAP system, but preferably by all instances.

     

    Table 13 shows mount option recommendations for SAP file systems. Note that in different zone clusters, the file system /usr/sap might map to different shares. In the example deployment of Oracle Optimized Solution for SAP, each SAP system (PR1 and PP1) has its own /usr/sap shares. As shown in Table 13, in SAP system PR1 one /usr/sap share is used by the ASCS and Enqueue Replication Server (ERS) instances and must be mounted in the zones running ASCS and ERS (these zones are named epr1-ascs-01 and epr1-ascs-02). Another /usr/sap share is used by the PAS and AAS instances and must be mounted in the zones running PAS and AAS (epr1-haapps-01 and epr1-haapps-02). In other words, each /usr/sap share should be mounted only in its corresponding zones.

     

    Table 13. Recommended Mount Options for SAP File Systems

     

    File System | Oracle Solaris Zone | Mount Options
    /sapmnt/<SID> | Every SAP zone | rw,bg,hard,[intr],rsize=1048576,wsize=1048576,proto=tcp,vers=3
    /sapmnt/<SID> | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp
    /usr/sap (for ASCS and ERS instances) | Every zone where the corresponding SAP instance(s) might run | rw,bg,hard,[intr],rsize=1048576,wsize=1048576,proto=tcp,vers=3
    /usr/sap (for PAS and AAS instances) | Every zone where the corresponding SAP instance(s) might run | rw,bg,hard,[intr],rsize=1048576,wsize=1048576,proto=tcp,vers=3
    /usr/sap/trans | Every SAP zone | rw,bg,hard,[intr],rsize=1048576,wsize=1048576,proto=tcp,vers=3

     

    Important: The file system /sapmnt/<SID> must be mounted in every zone where the corresponding SAP instances will run. Because of BR*Tools, /sapmnt/<SID> must also be mounted in the zones where Oracle Database will run.

     

    The SAP file systems under Oracle Solaris Cluster control must have the mount option intr (instead of nointr) to allow signals to interrupt file operations on these mount points. Using the nointr option instead would prevent the sapstartsrv process from being successfully stopped in the event that a file system server is unresponsive.

     

    Examples of /etc/vfstab Entries in Application Domains

     

    In the example deployment depicted in Figure 4, there are two application domains: one for mission-critical SAP components and application servers (HA APP) and another for SAP application servers that do not require high availability (APP). This section gives examples of /etc/vfstab entries for the zones and zone clusters in these two application domains.

     

    Note that in the zone clusters, except for the /export/software file system, the mount at boot option is set to no because the file systems are managed as Oracle Solaris Cluster storage resources. In the zones without HA (in which Oracle Solaris Cluster resources are not used), the file systems are configured with the mount at boot option set to yes.
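
    For reference, the following is a minimal sketch of how one of these NFS file systems might be placed under Oracle Solaris Cluster control using the SUNW.ScalMountPoint resource type. The resource group name (pr1-haapps-rg) and resource name (pr1-sapmnt-rs) are hypothetical, the appliance must already be registered as a NAS device in the cluster, and the exact steps and properties for your release should be taken from the Oracle Solaris Cluster documentation.

    # Register the resource type once per zone cluster (names below are hypothetical).
    root@sapt58-haapp-01:~# clresourcetype register -Z pr1-haapps-zc SUNW.ScalMountPoint

    # Create a storage resource that manages the NFS mount for /sapmnt/PR1.
    root@sapt58-haapp-01:~# clresource create -Z pr1-haapps-zc -g pr1-haapps-rg \
        -t SUNW.ScalMountPoint \
        -p FileSystemType=nas \
        -p TargetFileSystem=izfssa-01:/export/sapt58/PR1/sapmnt \
        -p MountPointDir=/sapmnt/PR1 \
        pr1-sapmnt-rs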

     

    HA APP Domain (sapt58-haapp-0x)

     

    Listing 2 through Listing 4 give the /etc/vfstab entries used to mount SAP and Oracle Database file systems in virtual servers in the HA APP domain.

     

    root@sapt58-haapp-01:~# cat /etc/vfstab | grep izfssa
      izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    root@sapt58-haapp-01:~#  

    Listing 2: Example /etc/vfstab entry for global zone sapt58-haapp-01.

     

    root@sapt58-haapp-01:~# cat /zones/pr1-haapps-zc/root/etc/vfstab | grep izfssa 
    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    izfssa-01:/export/sapt58/PR1/sapmnt         -  /sapmnt/PR1       nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    izfssa-01:/export/sapt58/PR1/usr-sap        -  /usr/sap          nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    izfssa-02:/export/sapt58/TRANS/trans        -  /usr/sap/trans    nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    izfssa-02:/export/sapt58/ORACLE/oracle      -  /oracle           nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3 
    root@sapt58-haapp-01:~#  

    Listing 3: Example /etc/vfstab entries for zone cluster pr1-haapps-zc.

     

    root@sapt58-haapp-01:~# cat /zones/pr1-ascs-zc/root/etc/vfstab | grep izfssa
    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/sapmnt         -  /sapmnt/PR1       nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/usr-sap        -  /usr/sap          nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-02:/export/sapt58/TRANS/trans        -  /usr/sap/trans    nfs  -  no   
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    root@sapt58-haapp-01:~#  

    Listing 4: Example /etc/vfstab entries for zone cluster pr1-ascs-zc.

     

    APP Domain (sapt58-app-0x)

     

    Listing 5 and Listing 6 show example /etc/vfstab entries that mount SAP and Oracle Database file systems in zones in the APP domain. Because these zones support non-HA applications, Oracle Solaris Cluster is not used to manage zone resources. As a result, the file systems are mounted with the mount at boot option set to yes.

     

    root@sapt58-app-01:~# cat /etc/vfstab | grep izfssa
    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    root@sapt58-app-01:~#  

    Listing 5: Example /etc/vfstab entry for global zone sapt58-app-01.

     

    root@sapt58-app-01:~# cat /zones/pr1-aas-01/root/etc/vfstab | grep izfssa
    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/sapmnt         -  /sapmnt/PR1       nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/usr-sap        -  /usr/sap          nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-02:/export/sapt58/TRANS/trans        -  /usr/sap/trans    nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-02:/export/sapt58/ORACLE/oracle      -  /oracle           nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    root@sapt58-app-01:~# 

    Listing 6: Example /etc/vfstab entries for zones pr1-aas-01 and pr1-aas-02.

     

    NFS Mount Options for Oracle Database File Systems

     

    This section describes the recommended NFS mount options for Oracle Database file systems that support SAP systems on Oracle Solaris and are NFS-mounted from an Oracle ZFS Storage Appliance system.

     

    As described in this article, the example deployment uses Oracle Database file systems based on recommendations in the SAP installation guide, SAP Note 527843, and test implementations for Oracle Optimized Solution for SAP. (Access to SAP Notes requires logon and authentication to the SAP Marketplace.) The recommended mount options for Oracle Database file systems originate from My Oracle Support document 1567137.1. (Access to My Oracle Support documents requires logon and authentication to My Oracle Support.)

     

    Note: For optimal performance, the option forcedirectio should be set on all file systems that contain Oracle Database data files. It should not be used on file systems that contain binaries.

     

    Table 14 through Table 16 show mount option recommendations for Oracle Database file systems.

     

    Table 14. Mount Options for Single-Instance Database Configuration

     

    File System | Mount Options
    Data files | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    Binaries | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,suid

     

    Table 15. Mount Options for Oracle Database with Oracle RAC

     

    File System | Mount Options
    Data files | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    Binaries | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid

     

    Table 16. Mount Options for the Cluster Ready Services Voting Disk or Oracle Cluster Registry Components of Oracle Clusterware

     

    File System | Mount Options
    Voting Disk or Oracle Cluster Registry | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,forcedirectio

     

    Note: On Oracle Solaris, the default NFS mount options include suid, so unless nosuid is explicitly specified, suid is in effect even when it does not appear in the mount options.
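
    To see the mount options that are actually in effect for an NFS file system on Oracle Solaris, including defaults such as suid that are not listed explicitly, you can query a mounted file system with nfsstat; the mount point below is one of the examples used in this article.

    # Display the effective NFS mount options for a mounted file system.
    root@sapt58-db-01:~# nfsstat -m /oracle/PR1/sapdata1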

     

    Examples of /etc/vfstab Entries in the Database Domain

     

    Listing 7 and Listing 8 show example /etc/vfstab entries used to mount Oracle Database file systems in the global zone and zone cluster associated with the database domain. Note that in the zone cluster for Oracle Database, the mount at boot option is set to no (except for the /export/software file system) because the file systems for Oracle Database are managed as Oracle Solaris Cluster storage resources.

     

    root@sapt58-db-01:~# cat /etc/vfstab | grep izfssa
    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  
      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    root@sapt58-db-01:~#  

    Listing 7: Example /etc/vfstab entry for global zone sapt58-db-01.

     

    izfssa-02:/export/sapt58/SOFTWARE/software  -  /export/software  nfs  -  yes  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-02:/export/sapt58/ORACLE/pr1     -  /oracle/PR1/121               nfs  -  no  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/stage         -  /oracle/stage         nfs  -  no  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
    izfssa-01:/export/sapt58/PR1/crs1   -  /oracle/PR1/crs1      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/crs2   -  /oracle/PR1/crs2      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/crs3   -  /oracle/PR1/crs3      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/origlogA   -  /oracle/PR1/origlogA      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/origlogB   -  /oracle/PR1/origlogB      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/mirrlogA   -  /oracle/PR1/mirrlogA     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/mirrlogB   -  /oracle/PR1/mirrlogB     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/oraarch    -  /oracle/PR1/oraarch      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/oraflash   -  /oracle/PR1/oraflash     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapbackup  -  /oracle/PR1/sapbackup    nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapcheck   -  /oracle/PR1/sapcheck     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapdata1   -  /oracle/PR1/sapdata1     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapdata2   -  /oracle/PR1/sapdata2     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapdata3   -  /oracle/PR1/sapdata3     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapdata4   -  /oracle/PR1/sapdata4     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapdata5   -  /oracle/PR1/sapdata5     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/sapreorg   -  /oracle/PR1/sapreorg     nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid
    izfssa-01:/export/sapt58/PR1/saparch    -  /oracle/PR1/saparch      nfs  -  no  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,noac,forcedirectio,vers=3,suid

    Listing 8: Example /etc/vfstab entries for zone cluster pr1-db-zc.

     

    Creating and Mounting File Systems in Database Zones

     

    Listing 9 and Listing 10 show how to create and mount nested file systems in the database zones pr1-db-01 and pr1-db-02, respectively.

     

    root@epr1-db-01:~# mkdir -p /oracle/PR1/121 /oracle/GRID /oracle/oraInventory /oracle/stage \
    /oracle/PR1/crs1 /oracle/PR1/crs2 /oracle/PR1/crs3 /oracle/PR1/origlogA /oracle/PR1/origlogB \
    /oracle/PR1/mirrlogA /oracle/PR1/mirrlogB /oracle/PR1/oraarch /oracle/PR1/oraflash \
    /oracle/PR1/sapbackup /oracle/PR1/sapcheck /oracle/PR1/sapdata1 /oracle/PR1/sapdata2 \
    /oracle/PR1/sapdata3 /oracle/PR1/sapdata4 /oracle/PR1/sapdata5 /oracle/PR1/sapreorg \
    /oracle/PR1/saparch
    root@epr1-db-01:~#
    root@epr1-db-01:~# for fs in /oracle/PR1/121 /oracle/stage /oracle/PR1/crs1 /oracle/PR1/crs2 \
    /oracle/PR1/crs3 /oracle/PR1/origlogA /oracle/PR1/origlogB /oracle/PR1/mirrlogA \
    /oracle/PR1/mirrlogB /oracle/PR1/oraarch /oracle/PR1/oraflash /oracle/PR1/sapbackup \
    /oracle/PR1/sapcheck /oracle/PR1/sapdata1 /oracle/PR1/sapdata2 /oracle/PR1/sapdata3 \
    /oracle/PR1/sapdata4 /oracle/PR1/sapdata5 /oracle/PR1/sapreorg /oracle/PR1/saparch; do set -x; \
    mount $fs; set +x; done

    Listing 9: Creating mount points and mounting file systems in database zone pr1-db-01.

     

    root@epr1-db-02:~# mkdir -p /oracle/PR1/121 /oracle/GRID /oracle/oraInventory /oracle/stage \
    /oracle/PR1/crs1 /oracle/PR1/crs2 /oracle/PR1/crs3 /oracle/PR1/origlogA /oracle/PR1/origlogB \
    /oracle/PR1/mirrlogA /oracle/PR1/mirrlogB /oracle/PR1/oraarch /oracle/PR1/oraflash \
    /oracle/PR1/sapbackup /oracle/PR1/sapcheck /oracle/PR1/sapdata1 /oracle/PR1/sapdata2 \
    /oracle/PR1/sapdata3 /oracle/PR1/sapdata4 /oracle/PR1/sapdata5 /oracle/PR1/sapreorg \
    /oracle/PR1/saparch
    root@epr1-db-02:~#
    root@epr1-db-02:~# for fs in /oracle/PR1/121 /oracle/stage /oracle/PR1/crs1 /oracle/PR1/crs2 \
    /oracle/PR1/crs3 /oracle/PR1/origlogA /oracle/PR1/origlogB /oracle/PR1/mirrlogA \
    /oracle/PR1/mirrlogB /oracle/PR1/oraarch /oracle/PR1/oraflash /oracle/PR1/sapbackup \
    /oracle/PR1/sapcheck /oracle/PR1/sapdata1 /oracle/PR1/sapdata2 /oracle/PR1/sapdata3 \
    /oracle/PR1/sapdata4 /oracle/PR1/sapdata5 /oracle/PR1/sapreorg /oracle/PR1/saparch; do set -x; \
    mount $fs; set +x; done

    Listing 10: Creating mount points and mounting file systems in database zone pr1-db-02.

     

    Mount Options for Oracle RMAN

     

    Table 17 shows the recommended mount options for Oracle RMAN. For Oracle RMAN backup sets, image copies, and Data Pump (a feature of Oracle Database) dump files, the noac (no attribute caching) mount option should not be specified. Oracle RMAN and Data Pump do not check this option, and specifying it can adversely affect performance. When noac is set, clients do not cache file attributes locally; therefore, clients must check the attributes on the server every time a file is accessed (for example, when Data Pump appends the next few bytes of metadata or Oracle RMAN changes some bytes during an incremental backup).

     

    Table 17. Mount Options for Oracle RMAN

     

    File System | Mount Options
    File system configured for Oracle RMAN | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,forcedirectio
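
    As an illustration only, an /etc/vfstab entry for a share dedicated to Oracle RMAN backups might look like the following; the share is the PR1 backup share from Table 6, and the local mount point /backup/PR1 is a hypothetical choice.

    izfssa-01:/export/sapt58/PR1/backup  -  /backup/PR1  nfs  -  yes
      rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,forcedirectio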

     

    Mount Options for Single-Instance Oracle Database Configuration with Failover

     

    Table 18 lists file systems and mount options for a single-instance Oracle Database configuration with failover. In this environment, the database is deployed in a zone cluster using NAS storage. Oracle Automatic Storage Management (Oracle ASM) is not used.

     

    Table 18. Mount Options for Single-Instance Oracle Database Configuration with Failover

                                  

    File System | Zone | Mount Options
    /oracle | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,suid
    /oracle/client | Every Oracle Database zone; every zone for SAP PAS and AAS instances | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,suid
    /oracle/<SID>/<release> | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,suid
    /oracle/<SID>/origlogA | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/origlogB | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/mirrlogA | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/mirrlogB | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/oraarch | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapreorg | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapdata1 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapdata2 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapdata3 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapdata4 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/saparch | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/oraflash | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapcheck | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/saptrace | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapprof | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/sapbackup | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid
    /oracle/<SID>/cfgtoollogs | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,forcedirectio,proto=tcp,suid

     

    Mount Options for Oracle Database with Oracle RAC

     

    Table 19 lists mount options for Oracle RAC file systems in a zone cluster on NAS storage (Oracle ASM is not used). According to SAP Note 527843, Oracle Clusterware must be installed locally. (Access to SAP Notes requires logon and authentication to the SAP Marketplace.) Therefore, the file systems /oracle and /oracle/GRID are both located on local file systems, and NFS mount options are not required for them.

     

    Table 19. Mount Options for Oracle Database with Oracle RAC

                                        

    File System | Zone | Mount Options
    /oracle/client | Every Oracle Database zone; every zone for SAP PAS and AAS instances | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid
    /oracle/<SID>/<release> | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid
    /oracle/<SID>/origlogA | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/origlogB | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/mirrlogA | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/mirrlogB | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/oraarch | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapreorg | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapdata1 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapdata2 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapdata3 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapdata4 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/saparch | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/oraflash | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapcheck | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/saptrace | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapprof | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/sapbackup | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/cfgtoollogs | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,suid,forcedirectio
    /oracle/<SID>/crs1 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,forcedirectio
    /oracle/<SID>/crs2 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,forcedirectio
    /oracle/<SID>/crs3 | Every Oracle Database zone | rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3,proto=tcp,noac,forcedirectio

     

    Tuning the NFS Environment

     

    To optimize NFS performance, there are two post-installation tuning steps:

     

    • Enabling the Direct NFS Client feature of Oracle Database
    • Modifying operating system parameters

     

    Enabling Direct NFS Client

     

    With Oracle Database, instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS servers directly using Direct NFS Client. Direct NFS Client bypasses the limitations of the operating system cache and performs load balancing across up to four network paths to the NFS server. If a specified path fails, it reissues I/O requests over the remaining paths. Direct NFS Client is disabled by default in single-instance Oracle Database installations.

     

    For more information on configuring Direct NFS Client, see the post-installation tasks described in the Oracle Database Installation Guide.

     

    To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts that use the recommended mount options previously described in this article. By default, Direct NFS Client serves the NFS mount entries found in /etc/mnttab. To specify additional mounts or settings specific to Oracle Database instances, create or modify the /etc/oranfstab or $ORACLE_HOME/dbs/oranfstab file with entries describing the shares to be accessed on the Oracle ZFS Storage Appliance system.

     

    Note: When the oranfstab file is placed in $ORACLE_HOME/dbs, its entries apply only to the database instances running from that ORACLE_HOME. When oranfstab is placed in /etc, it is global to all Oracle Database instances in the OS instance (server, logical domain, or zone) and can contain mount points for all of them.
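
    As a sketch, an oranfstab entry for some of the PR1 data file shares might look like the following; the local and path IP addresses are placeholders for the storage network addresses of the database server and the appliance, while the export and mount values are taken from the examples in this article.

    server: izfssa-01
    local: 192.168.105.11
    path: 192.168.105.101
    export: /export/sapt58/PR1/sapdata1 mount: /oracle/PR1/sapdata1
    export: /export/sapt58/PR1/sapdata2 mount: /oracle/PR1/sapdata2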

     

    For Oracle Database 11.2.0.2 or later, change to the $ORACLE_HOME/rdbms/lib directory and enter the following command to enable Direct NFS Client. After entering the command, restart the Oracle Database instances.

     

    $ make -f ins_rdbms.mk dnfs_on

    Listing 11: Enabling Direct NFS Client.
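
    After the instances are restarted, one way to confirm that Direct NFS Client is in use is to query the v$dnfs_servers dynamic performance view, which lists the NFS servers and exports being accessed through Direct NFS Client; the sqlplus invocation below is just one way to run the query.

    $ sqlplus -s / as sysdba <<EOF
    SELECT svrname, dirname FROM v\$dnfs_servers;
    EOF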

     

    To disable the use of Direct NFS Client, enter this command, and then restart the Oracle Database instances:

     

    $ make -f ins_rdbms.mk dnfs_off

    Listing 12: Disabling Direct NFS Client.

     

    Tuning System Parameters

     

    NFS parameters (and other operating system parameters) are set in the /etc/system file located in each domain's global zone, and they are read when Oracle Solaris boots. The size of data blocks typically has a measurable impact on data transfer performance because a value that is too small can generate an unnecessarily large number of small I/O operations.

     

    Before the NFS mount options and modified values for rsize and wsize can take effect, the system NFS tunable parameters must be set accordingly in /etc/system and the domain must be rebooted. There are two NFS parameters to consider: maximum transfer size (nfs:nfs3_max_transfer_size) and logical block size (nfs:nfs3_bsize).

     

    • Maximum transfer size. The nfs:nfs3_max_transfer_size parameter defines the maximum size of the data transferred in a single NFS request over the network. In Oracle Solaris 11.2, the default value is 1,048,576 (1 MB), which is also the recommended value for the maximum transfer size.

      Note: To reduce the transfer size to a value that is smaller than the default, the mount options rsize and wsize can be set individually on a per-file-system basis.
    • Logical block size. The nfs:nfs3_bsize parameter controls the block size used by the NFS client. The default nfs:nfs3_bsize value in Oracle Solaris 11.2 is 32,768 (32 KB). The recommended logical block size value for this architecture is 1,048,576 (1 MB). To modify the logical block size, add the following line to /etc/system in the global zone and reboot:

      set nfs:nfs3_bsize=1048576

     

    The nfs:nfs3_bsize parameter should be set in conjunction with the nfs:nfs3_max_transfer_size parameter. When increasing the nfs:nfs3_bsize value, increase the value of nfs:nfs3_max_transfer_size as well. When decreasing nfs:nfs3_bsize, it is not necessary to change nfs:nfs3_max_transfer_size.
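
    Taken together, the /etc/system entries for the recommended values look like the following; on Oracle Solaris 11.2 the nfs:nfs3_max_transfer_size line only restates the default and may be omitted. A reboot of the domain is required for the settings to take effect.

    * NFS client tunables in the domain's global zone (reboot required to take effect).
    set nfs:nfs3_max_transfer_size=1048576
    set nfs:nfs3_bsize=1048576

    One way to check the value the running kernel is using (assuming the nfs module is loaded; shown for illustration only) is:

    root@sapt58-db-01:~# echo "nfs3_bsize/D" | mdb -k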

     

    References

     

    Refer to these resources for more information:

     

    SAP documents:

     

     

    Oracle Database documents:

     

     

    Other documents:

     

     

    About the Authors

     

    Jan Brosowski is a principal sales consultant for Oracle Systems in Europe North. Located in Walldorf, Germany, he is responsible for developing customer-specific architectures and operating models for both SAP and Hyperion systems, accompanying projects from the requirements specification process to going live. Brosowski holds a Master of Business and Engineering degree and has been working for over 15 years in different roles with SAP systems.

     

    Ulrich Conrad is a principal software engineer in the Application Engineering organization located in Walldorf, Germany. He is responsible for the certification of Oracle ZFS Storage Appliance systems in several software environments, especially in the SAP ecosystem. Conrad is involved in design and development to support the SAP Landscape Virtualization Manager using Oracle ZFS Storage Appliance systems, in addition to developing solutions around Oracle ZFS Storage Appliance systems as a central storage repository.

     

    Hans-Juergen Denecke is a principal software engineer in Oracle's Cloud Applications Engineering organization in Germany. He is responsible for certifications of the SAP software stack on Oracle hardware, Oracle Solaris, and other Oracle software products. Denecke is a graduate engineer in mechanical engineering and has over 20 years of experience in the computer industry.

     

    Victor Galis is a master sales consultant, part of the global Oracle Solution Center organization. He supports customers and sales teams architecting SAP environments based on Oracle hardware and technology. He works with SAP Basis and DBA teams, systems and storage administrators, as well as business owners and executives. His role is to understand current environments, business requirements, and pain points as well as future growth and map them to SAP landscapes that meet both performance and high availability expectations. He has been involved with many SAP on Oracle SuperCluster customer environments as an architect and has provided deployment and go-live assistance. Galis is an SAP-certified consultant and Oracle Database administrator.

     

    Gia-Khanh Nguyen is an architect of the Oracle Solaris Cluster product. He contributes to product requirement and design specifications for the support of enterprise solutions on Oracle systems, and develops demonstrations of key features.

     

    Pierre Reynes is a solution manager for Oracle Optimized Solution for SAP and Oracle Optimized Solution for PeopleSoft. He is responsible for driving the strategy and efforts to help raise customer and market awareness for Oracle Optimized Solutions in these areas. Reynes has over 25 years of experience in the computer and networking industries.

     

    Xirui Yang is a principal software engineer in Oracle's Cloud Applications Engineering organization in Walldorf, Germany. She is responsible for high availability (HA) SAP systems that use Oracle Solaris Cluster, including codeveloping, installing, testing, and SAP certification for these systems. For the Oracle Optimized Solution for SAP project, Yang drove the installation and testing of HA SAP systems on Oracle Solaris Cluster with Oracle Database (with and without Oracle RAC).