Setting Up Highly Available SAP Systems on Oracle SuperCluster—Part 3: Installing Highly Available SAP with Oracle Solaris Cluster on Oracle SuperCluster

Version 1

    by Jan Brosowski, Victor Galis, Gia-Khanh Nguyen, and Pierre Reynes

     

    This article is Part 3 of a three-part series that describes steps for setting up SAP on Oracle SuperCluster in a highly available configuration. This article provides procedures for installing the SAP ABAP components in zone clusters configured and managed by Oracle Solaris Cluster.

     

    Table of Contents
    Introduction
    Installing and Configuring SAP Components
    Preparing the Environment for SAP Installation
    Installing SAP Components
    Setting Up Zone Cluster Resources for ABAP Stack Instances
    Final Thoughts
    See Also
    About the Authors

     

    Introduction

     

    This article is Part 3 of a three-part series that provides best practices and recommendations for setting up highly available SAP systems on Oracle engineered systems. A team of Oracle engineers and SAP experts used Oracle SuperCluster M7 in a sample deployment to compile and test the step-by-step procedures and recommendations provided in this article series.
    To achieve high availability (HA), it is necessary to put mission-critical SAP components under the control of Oracle Solaris Cluster, creating a cluster of two or more nodes. Oracle Solaris Cluster can then orchestrate failover or disaster recovery strategies while managing infrastructure resources such as database, network connectivity, and shared storage.

Oracle Optimized Solutions provide tested and proven best practices for how to run software products on Oracle systems.

    This article series ("Setting up Highly Available SAP Systems on Oracle SuperCluster") divides software installation and configuration tasks into three articles:

     

    • Part 1: Configuring Virtualization for SAP on Oracle SuperCluster. The first article describes the steps needed to prepare virtual environments on the Oracle SuperCluster M7. These virtual environments are separate, isolated I/O domains that help to improve application resiliency. Two pairs of I/O domains are created on the two physical nodes (Figure 1): one domain pair for SAP components and application servers that require advanced HA, and a second pair for application servers that are not mission-critical.
    • Part 2: Deploying and Configuring Oracle Solaris Cluster for SAP on Oracle SuperCluster. The second article describes the steps for installing the Oracle Solaris Cluster software and configuring two zone clusters across the nodes (Figure 1). The first zone cluster is dedicated to the most-critical SAP components: the ABAP Central Services instance (ASCS) and Enqueue Replication Server instance (ERS). The second zone cluster is used for the SAP Primary Application Server (PAS) and any other mission-critical Additional Application Servers (AAS).
    • Part 3: Installing Highly Available SAP with Oracle Solaris Cluster on Oracle SuperCluster. Part 3 is this article, which gives step-by-step procedures for installing SAP ABAP stack components in the zone clusters and configuring them for HA. Oracle Solaris Cluster implements the concept of logical hostnames that are used to monitor network access availability and, if necessary, can move an IP address (managed as a resource) transparently to other nodes.

     

    f1.png

    Figure 1. An HA configuration for SAP uses the zone clustering capabilities of Oracle Solaris Cluster.

     

    Installing and Configuring SAP Components

     

    To achieve the highest service levels for SAP applications, Oracle Solaris Cluster is used to monitor the status of zone cluster nodes containing the most-critical SAP software components. Part 2 in this series details the procedures used to create two zone clusters:

     

    • pr1-ascs-zc for the SAP ABAP Central Services instance (ASCS) and Enqueue Replication Server instance (ERS)
    • pr1-haapps-zc for the SAP Primary Application Server (PAS) and other application servers that require the highest levels of availability

     

This article describes the steps for installing the SAP components and configuring logical hostnames, which are shared IP addresses that the Oracle Solaris Cluster software manages as cluster resources. A logical hostname is defined for each SAP application server that requires the highest service availability levels. Clients access the server through the logical hostname's IP address and are unaware of the node's actual identity, so Oracle Solaris Cluster can transparently move the logical hostname and its application server across nodes as needed: it takes the IP address down on the first node, brings it up on the other node, and restarts the corresponding SAP component.

     

    Preparing the Environment for SAP Installation

     

    Before installing the SAP components, it is necessary to perform the following preliminary steps, which are described in more detail in subsequent sections:

     

    1. Create logical hostnames for the zone clusters. These are the virtual hosts for the ASCS, ERS, PAS, D01, and D02 servers. (D01 and D02 are examples of mission-critical Additional Application Servers [AAS] in this sample deployment.)
2. Prepare file systems for SAP component installation. In each Oracle Solaris Zone, create /sapmnt/<SID>, /usr/sap, /usr/sap/<SID>, /oracle, and other customer-specific file systems, as needed.
    3. Create highly available Oracle Solaris Cluster storage resources that will monitor the NFS-mounted file systems required for the SAP NetWeaver stack.
    4. Create Oracle Solaris projects by following the instructions given in SAP Note 724713 - Parameter Settings for Oracle Solaris 10 and above (access to SAP Notes requires logon and authentication to the SAP Marketplace).
    5. Validate zone cluster operation. Verify that zone clusters operate properly and that the status of a node changes between up and down.

     

    Creating Logical Hostnames for Zone Clusters

     

    The first step is to add network addresses to Oracle Solaris Cluster, configuring IP addresses that will be under Oracle Solaris Cluster control. Select the zone cluster, then select the Solaris Resources tab, and click Add under Network Resources. For all hostnames that are to be managed as logical hostnames, repeat this step, entering all network addresses to be added to the zone cluster, including addresses for both the external 10 GbE and internal InfiniBand (IB) network interfaces (Figure 2).

     

    f2.png

    Figure 2. Enter a network address to be added to the zone cluster.

     

Repeat this step for all IP addresses to be managed by Oracle Solaris Cluster as logical hostnames, on both the external 10 GbE and internal IB networks. In our project we used em7pr1-lh, im7pr1-lh, em7pr1-ers-lh, im7pr1-ers-lh, em7pr1-pas-lh, im7pr1-pas-lh, em7pr1-d01-lh, im7pr1-d01-lh, em7pr1-d02-lh, and im7pr1-d02-lh.

     

    Perform the steps shown in Figure 3 through Figure 11 to create a logical hostname resource for the zone cluster pr1-ascs-zc. This zone cluster has two nodes: em7pr1-ascs-01 and em7pr1-ascs-02. The logical hostname for this zone cluster (im7pr1-lh) is specified in Figure 7. The logical hostname resource is assigned to a resource group (ascs-rg, Figure 9) so that Oracle Solaris Cluster can manage all resources in the group as a unit, relocating all resources as a unit if a failover or switchover is initiated for that group.

     

Note: SAP limits the length of all hostnames used in a SAP installation to 13 characters or fewer. The name of the resource is not relevant; only the hostname as resolved by DNS needs to meet this requirement. For this example installation, we initially used a naming convention for one hostname (em7pr1-ascs-lh) that did not meet this requirement, so we later applied a different naming convention just for the ASCS logical hostname (im7pr1-lh, as shown in Figure 7). Because multiple teams are often involved in defining and managing hostnames and corresponding IP addresses, review the entire list of allocated hostnames and IP resources to make sure that they meet SAP requirements. For more details, refer to SAP Note 611361 – "Hostnames of SAP server" (access to SAP Notes requires logon and authentication to the SAP Marketplace).

     

    f3.jpg

    Figure 3. Under Tasks, select the Logical Hostname wizard.

     

    f4.png

    Figure 4. Verify the prerequisites for configuring a logical hostname resource.

     

    f5.png

    Figure 5. Select "Zone cluster" as the logical hostname location.

     

    f6.png

    Figure 6. Select the zone cluster nodes for the logical hostname.

     

    f7.png

    Figure 7. Enter a name for the logical hostname.

     

    f8.png

    Figure 8. Review PNM (Public Network Management) objects (there are none).

     

    f9.png

    Figure 9. Enter a resource group name.

     

Modify the resource group name to reflect your naming convention; the wizard's default value is unwieldy.

     

    f10.png

    Figure 10. Review the configuration summary and click Next.

     

    f11.png

    Figure 11. View the results screen, which shows the command-line interface (CLI) commands.

     

    f12.png

    Figure 12. View the created ASCS resource group.

     

    A resource group called ascs-rg (containing the logical hostname resource em7pr1-ascs-lh) is now defined in zone cluster pr1-ascs-zc. Oracle Solaris Cluster can manage all the resources associated with the ASCS server in the ascs-rg resource group.
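The results screen (Figure 11) shows the CLI commands that the wizard executed. A minimal sketch of the equivalent commands, run from a zone cluster node (resource and group names follow this example; verify against your own Figure 11 output):

# Create the resource group that holds all ASCS-related resources.
clrg create ascs-rg

# Create a logical hostname resource for the external 10 GbE address.
clreslogicalhostname create -g ascs-rg -h em7pr1-ascs-lh em7pr1-ascs-lh

# Bring the resource group online and enable its resources.
clrg online -eM ascs-rg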

     

Logical hostnames also need to be created for the InfiniBand (IB) hosts. These are placed in the same resource group so that they are always active together on the same node. Figure 13 through Figure 17 show the steps to create the logical hostname resource im7pr1-ascs-lh in the ascs-rg resource group. To create a new logical hostname resource in the same resource group, click the Create button (Figure 13).

     

    f13.png

    Figure 13. Under Resources, click Create to add a new logical hostname resource for ASCS.

     

    f14.png

    Figure 14. Specify settings for the new resource in the resource group.

     

    f15.png

    Figure 15. Specify dependencies for the resource.

     

    f16.png

    Figure 16. List the network interfaces on each node.

     

    f17.png

    Figure 17. The logical hostname resources are created for the ASCS resource group.

     

    Configure resource groups and logical hostnames as needed for the project, creating one resource group for every SAP component that needs to be managed by Oracle Solaris Cluster. In our project, we configured resource groups ascs-rg and ers-rg in zone cluster pr1-ascs-zc, along with resource groups pas-rg, d01-rg, and d02-rg in zone cluster pr1-haapps-zc, with corresponding logical hostnames.
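Because this pattern repeats for each component, it can also be scripted. A sketch for the ERS resource group and its two logical hostnames (names from this example; the wizard's results screens show the authoritative commands for your system):

# Create the ERS resource group with its external and IB logical hostnames.
clrg create ers-rg
clreslogicalhostname create -g ers-rg -h em7pr1-ers-lh em7pr1-ers-lh
clreslogicalhostname create -g ers-rg -h im7pr1-ers-lh im7pr1-ers-lh
clrg online -eM ers-rg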

     

Preparing the File Systems for SAP Component Installation

     

    On Oracle SuperCluster, you have the option to allocate and configure storage on the internal Oracle ZFS Storage Appliance for SAP file systems such as /sapmnt, /usr/sap, and /usr/sap/trans. For Oracle SuperCluster M7, special care needs to be taken to understand I/O requirements for application file systems because the internal storage appliance is also used to provide root file systems for all Oracle VM Server for SPARC logical domains (LDOMs) and Oracle Solaris Zones. It is recommended to use an external Oracle ZFS Storage Appliance (connected via InfiniBand) for both SAP file systems and backup/recovery. Table 1 shows how local storage was configured in the example installation.

     

    Table 1. Example Storage Appliance Configuration

Head | Project | File System | Appliance Mount Point | Mounted from Zone
1 | PR1 | sapbackup | /export/sapm7/PR1/sapbackup | sapm7zdbadm1c1 (192.168.30.51), sapm7zdbadm1c2 (192.168.30.52)
1 | PR1 | sapmnt | /export/sapm7/PR1/sapmnt | pr1-ascs-zc (192.168.30.21, 192.168.30.22), pr1-haapps-zc (192.168.30.31, 192.168.30.32), pr1-aas-01 (192.168.30.41), pr1-aas-02 (192.168.30.42)
1 | PR1 | stage | /export/sapm7/PR1/stage | sapm7zdbadm1c1 (192.168.30.51), sapm7zdbadm1c2 (192.168.30.52)
1 | PR1 | usr-sap | /export/sapm7/PR1/usr-sap | pr1-ascs-zc (192.168.30.21, 192.168.30.22)
1 | PR1 | usr-sap-aas-01 | /export/sapm7/PR1/usr-sap-aas-01 | pr1-haapps-zc (192.168.30.31, 192.168.30.32)
1 | PR1 | oracle | /export/sapm7/PR1/oracle | All + Database

     

Configuring Appliance NFS for Oracle Solaris Cluster HA Storage

     

The next series of steps creates highly available Oracle Solaris Cluster storage for the file systems required for the SAP NetWeaver stack.

     

    Step 1. Create mount points and configure /etc/vfstab.

     

    In the example configuration, zone cluster nodes were created using Oracle Solaris Unified Archives from the source system, and each node had a /etc/vfstab file from the corresponding source node. In the Oracle SuperCluster M7 storage appliance heads, we created the same shares and projects as in the source Oracle ZFS Storage Appliance. For this reason, the content of the /etc/vfstab files requires only a simple host name substitution (Listing 1).

     

    root@em7pr1-ascs-01:~# sed -e 's,izfssa-01,sapm7-h1-storIB,' -e 's,izfssa-02,\
    sapm7-h2-storIB,' -e 's,sapt58,sapm7,' /etc/vfstab > /etc/vfstab.converted
    root@em7pr1-ascs-01:~# mv /etc/vfstab /etc/vfstab.org
    root@em7pr1-ascs-01:~# cp /etc/vfstab.converted /etc/vfstab
     
    root@em7pr1-ascs-02:~# mv /etc/vfstab /etc/vfstab.org
    root@em7pr1-ascs-02:~# scp root@em7pr1-ascs-01:/etc/vfstab /etc
    Password: 
    vfstab               100% |*****************************|  1050       00:00    

    Listing 1: Configuring mounts by modifying /etc/vfstab files on both nodes.

     

    Step 2. Set NFS exceptions in the critical projects.

     

    Oracle Solaris Cluster can fence NFS mounts from nodes that are not active in the cluster (due to a cluster node failure, for example). Fencing prevents rogue processes from modifying or locking shared files, and releases file locks from fenced-out nodes, allowing for a quick restart of failed processes on a different node.

     

    Each zone cluster needs to be added separately to the NFS exception list for each share. The list should contain IP addresses for the ASCS and HAAPP zone cluster nodes, along with those for the Oracle Database zones for the PR1 system.

     

    root@em7pr1-ascs-01:~# l=""
    root@em7pr1-ascs-01:~# for ip in `grep 192.168.28 /export/software/hosts | egrep \
    "pr1|c1-" | egrep -v "\-lh|vip" | awk '{ print $1 }'`; do l="$l""@$ip/32:"; done
    root@em7pr1-ascs-01:~# echo $l
    @192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:@192.168.28.153/32:
    @192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:@192.168.28.161/32:

    Listing 2: Building an NFS exception list.

     

Copy the string output produced in Listing 2 and paste it into the commands shown in Listing 3, which are executed in each of the respective storage appliance heads. Note that the netmask is 32, so each IP address is treated individually. Each IP address should appear twice in the string (once for the root= property and once for the rw= property):

     

    # ssh root@sapm7-h1-storadm
    Password:
    Last login: Wed Jan 25 21:07:17 2017 from 10.129.112.124
    sapm7-h1-storadm:> shares
    sapm7-h1-storadm:shares> select PR1
    sapm7-h1-storadm:shares PR1> set sharenfs="sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:\
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:\
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32"
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32 (uncommitted)
    sapm7-h1-storadm:shares PR1> commit
     
     
    sapm7-h2-storadm:shares ORACLE> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:
    @192.168.28.106/32,rw=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:
    @192.168.28.106/32
    sapm7-h2-storadm:shares ORACLE> set sharenfs="sec=sys,root=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:\
    @192.168.28.106/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:\
    @192.168.28.106/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32"
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:
    @192.168.28.106/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.105/32:@192.168.28.102/32:
    @192.168.28.106/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32 (uncommitted)
    sapm7-h2-storadm:shares ORACLE> commit
    sapm7-h2-storadm:shares ORACLE> done
    sapm7-h2-storadm:shares> select TRANS
    sapm7-h2-storadm:shares TRANS> set sharenfs="sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:\
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:\
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:\
    @192.168.28.161/32"
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32 (uncommitted)
    sapm7-h2-storadm:shares TRANS> commit

    Listing 3: Configuring the NFS exception list on the appliance heads.

     

    Note: The CLI for the internal Oracle ZFS Storage Appliance heads can be reached using IB hostnames from any of the global and non-global zones within Oracle SuperCluster.

     

    Step 3. Mount the file systems in both zone cluster nodes.

     

    Because the zone cluster nodes were created using Unified Archives from the source system, each node inherited mount points from the corresponding source nodes, as shown in Listing 4.

     

    root@em7pr1-ascs-01:~# ls /export/software /sapmnt/PR1 /usr/sap
    /export/software:
     
    /sapmnt/PR1:
     
    /usr/sap:
     
    root@em7pr1-ascs-01:~# mount /export/software
    root@em7pr1-ascs-01:~# mount /sapmnt/PR1     
    root@em7pr1-ascs-01:~# mount /usr/sap
    root@em7pr1-ascs-01:~# ls /usr/sap
    root@em7pr1-ascs-01:~# mkdir /usr/sap/trans
    root@em7pr1-ascs-01:~# mount /usr/sap/trans
    root@em7pr1-ascs-01:~# df -h -F nfs
    Filesystem             Size   Used  Available Capacity  Mounted on
    sapm7-h2-storIB:/export/SOFTWARE/software
                           6.9T    22G       6.9T     1%    /export/software
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt
                           6.8T    31K       6.8T     1%    /sapmnt/PR1
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap
                           6.8T    32K       6.8T     1%    /usr/sap
    sapm7-h2-storIB:/export/sapm7/TRANS/trans
                           6.9T    31K       6.9T     1%    /usr/sap/trans
     
     
    root@em7pr1-ascs-02:~# mount /export/software
    root@em7pr1-ascs-02:~# mount /sapmnt/PR1
    root@em7pr1-ascs-02:~# mount /usr/sap
    root@em7pr1-ascs-02:~# mount /usr/sap/trans
    root@em7pr1-ascs-02:~# df -h -F nfs
    Filesystem             Size   Used  Available Capacity  Mounted on
    sapm7-h2-storIB:/export/SOFTWARE/software
                           6.9T    22G       6.9T     1%    /export/software
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt
                           6.8T    31K       6.8T     1%    /sapmnt/PR1
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap
                           6.8T    32K       6.8T     1%    /usr/sap
    sapm7-h2-storIB:/export/sapm7/TRANS/trans
                           6.9T    31K       6.9T     1%    /usr/sap/trans

Listing 4: Mounting file systems on both zone cluster nodes.

     

    Step 4. Configure the Oracle Solaris Cluster NFS workflow in the Oracle ZFS Storage Appliance.

     

    Configure the Oracle Solaris Cluster NFS workflow in the storage appliance. Note that the workflow needs to be executed only in the first appliance head (Listing 5).

     

    # ssh root@sapm7-h1-storadm
    Password:
    Last login: Wed Jan 25 21:07:17 2017 from 10.129.112.124
    sapm7-h1-storadm:> maintenance workflows
    sapm7-h1-storadm:maintenance workflows> ls
    Properties:
                        showhidden = false
     
    Workflows:
     
    WORKFLOW     NAME                       OWNER SETID ORIGIN               VERSION     
    workflow-000 Clear locks                root  false Oracle Corporation   1.0.0       
    workflow-001 Configure for Oracle Solaris Cluster NFS root  false Oracle Corporation   1.0.0       
    workflow-002 Unconfigure Oracle Solaris Cluster NFS root  false Oracle Corporation   1.0.0       
    workflow-003 Configure for Oracle Enterprise Manager Monitoring root  false Sun Microsystems, Inc. 1.1         
    workflow-004 Unconfigure Oracle Enterprise Manager Monitoring root  false Sun Microsystems, Inc. 1.0         
     
    sapm7-h1-storadm:maintenance workflows> select workflow-001
    sapm7-h1-storadm:maintenance workflow-001> execute
    sapm7-h1-storadm:maintenance workflow-001 execute (uncommitted)> set password=welcome1
                          password = ********
    sapm7-h1-storadm:maintenance workflow-001 execute (uncommitted)> set changePassword=false
                    changePassword = false
    sapm7-h1-storadm:maintenance workflow-001 execute (uncommitted)> commit
    OSC configuration successfully completed.
    sapm7-h1-storadm:maintenance workflow-001> ls
    Properties:
                              name = Configure for Oracle Solaris Cluster NFS
                       description = Sets up environment for Oracle Solaris Cluster NFS
                              uuid = 92ed26fa-1088-4d4b-ceca-ebad58fc42d7
                          checksum = 15f4188643d7add37b5ad8bda6d9b4e7210f1cd66cd890a73df176382e800aec
                       installdate = 2015-11-25 02:30:55
                             owner = root
                            origin = Oracle Corporation
                             setid = false
                             alert = false
                           version = 1.0.0
                         scheduled = false
     
    sapm7-h1-storadm:> configuration users
    sapm7-h1-storadm:configuration users> ls
    Users:
     
    NAME                     USERNAME                 UID        TYPE
    Super-User               root                     0          Loc
    Oracle Solaris Cluster Agent osc_agent                2000000000 Loc

    Listing 5: Configuring the Oracle Solaris Cluster NFS workflow in the appliance.

     

The presence of the osc_agent user confirms that the workflow executed successfully.

     

    Step 5. Add a NAS device to each zone cluster.

     

At this point, we are ready to add the Oracle ZFS Storage Appliance to Oracle Solaris Cluster as a NAS device for each zone cluster. On the Zone Clusters pane, select the zone cluster pr1-ascs-zc and add the NAS device sapm7-h1-storIB to the zone cluster, as shown in Figure 18 through Figure 22.

     

    f18.png

    Figure 18. Select the zone cluster pr1-ascs-zc and click New NAS Device to add a device.

     

    f19.png

    Figure 19. Specify the IB name of the appliance head and the workflow username and password from Step 4.

     

    f20.png

    Figure 20. Review the summary and click Add.

     

    f21.png

    Figure 21. Validate the file system export list on the Oracle ZFS Storage Appliance head.

     

In some cases, the export list (Figure 21) does not contain all shared file systems. If this occurs, the export entries can be entered manually or added later as a property. A bug might prevent adding both IP addresses and shared exported file systems at the same time, so the PR1 project might not be discovered by the wizard (see bug 19982694, "listrwprojects client i/f not listing projects that have extra IPs"). To work around this problem, add the IP addresses first and then use Edit (on the Zone Clusters pane) to add the exported file systems.
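The same workaround can also be applied from the command line with the clnas commands used later in Listings 7 and 8; a minimal sketch using this example's device and project names:

# Register the appliance head with node IP addresses only (no projects yet).
clnas add -t sun_uss -p userid=osc_agent \
  -p "nodeIPs{em7pr1-ascs-01}"=192.168.28.152 \
  -p "nodeIPs{em7pr1-ascs-02}"=192.168.28.153 \
  sapm7-h1-storIB

# Then add the exported project explicitly.
clnas add-dir -d supercluster1/local/PR1 sapm7-h1-storIB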

     

    f22.png

    Figure 22. The wizard reports the status of the new NAS device as OK.

     

    Step 6. Check the resulting NAS configuration.

     

    Verify the NAS configuration:

     

    root@em7pr1-ascs-01:~# clnas show -v -d all
     
     === NAS Devices ===                            
     
    Nas Device:                                     sapm7-h1-storIB
      Type:                                            sun_uss
      userid:                                          osc_agent
      nodeIPs{em7pr1-ascs-02}:                         192.168.28.153
      nodeIPs{em7pr1-ascs-01}:                         192.168.28.152
      Project:                                         supercluster1/local/PR1
        File System:                                      /export/sapm7/PR1/oracle
        File System:                                      /export/sapm7/PR1/sapbackup
        File System:                                      /export/sapm7/PR1/sapmnt
        File System:                                      /export/sapm7/PR1/stage
        File System:                                      /export/sapm7/PR1/usr-sap
        File System:                                      /export/sapm7/PR1/usr-sap-aas-01
        File System:                                      /export/sapm7/PR1/usr-sap-aas-02
        File System:                                      /export/sapm7/PR1/usr-sap-haapps

    Listing 6: Checking the NAS configuration on an ASCS zone cluster node.

     

    Step 7. Add the second storage appliance head.

     

    The second Oracle ZFS Storage Appliance head can also be added using the wizard. The alternate method is to use these CLI commands:

     

    root@em7pr1-ascs-01:~# clnas add -t sun_uss -p userid=osc_agent \
      -p "nodeIPs{em7pr1-ascs-02}"=192.168.28.153 \
      -p "nodeIPs{em7pr1-ascs-01}"=192.168.28.152 \
      sapm7-h2-storIB
    Enter password:  
    root@em7pr1-ascs-01:~# clnas find-dir sapm7-h2-storIB
     
    === NAS Devices ===                            
     
    Nas Device:                                     sapm7-h2-storIB
      Type:                                            sun_uss
      Unconfigured Project:                            supercluster2/local/sc1-ldomfs
      Unconfigured Project:                            supercluster2/local/p_sapm7z0201
      Unconfigured Project:                            supercluster2/local/SOFTWARE

    Listing 7: Adding the second storage appliance head to the ASCS zone cluster using CLI commands.

     

    The TRANS and ORACLE projects might not be discovered because of a bug (see bug "19982694 - listrwprojects client i/f not listing projects that have extra IPs"). To work around this problem, add the exported file system for TRANS as shown in Listing 8 (only TRANS is used by the ASCS zone cluster).

     

    root@em7pr1-ascs-01:~# clnas add-dir -d supercluster2/local/TRANS sapm7-h2-storIB
    root@em7pr1-ascs-01:~# clnas show -v -d all
     
    === NAS Devices ===                            
     
    Nas Device:                                     sapm7-h1-storIB
      Type:                                            sun_uss
      userid:                                          osc_agent
      nodeIPs{em7pr1-ascs-02}:                         192.168.28.153
      nodeIPs{em7pr1-ascs-01}:                         192.168.28.152
      Project:                                         supercluster1/local/PR1
        File System:                                      /export/sapm7/PR1/oracle
        File System:                                      /export/sapm7/PR1/sapbackup
        File System:                                      /export/sapm7/PR1/sapmnt
        File System:                                      /export/sapm7/PR1/stage
        File System:                                      /export/sapm7/PR1/usr-sap
        File System:                                      /export/sapm7/PR1/usr-sap-aas-01
        File System:                                      /export/sapm7/PR1/usr-sap-aas-02
        File System:                                      /export/sapm7/PR1/usr-sap-haapps
     
    Nas Device:                                     sapm7-h2-storIB
      Type:                                            sun_uss
      nodeIPs{em7pr1-ascs-01}:                         192.168.28.152
      nodeIPs{em7pr1-ascs-02}:                         192.168.28.153
      userid:                                          osc_agent
      Project:                                         supercluster2/local/TRANS
        File System:                                      /export/sapm7/TRANS/trans

    Listing 8: Adding the TRANS project to the second appliance head.

     

    Step 8. Check prerequisites for NFS file systems.

     

    Before creating storage resources for the NFS file systems, check that the desired file systems are mounted on both nodes:

     

    root@em7pr1-ascs-01:~# df -h -F nfs
    Filesystem             Size   Used  Available Capacity  Mounted on
    sapm7-h2-storIB:/export/SOFTWARE/software
                           6.9T    22G       6.9T     1%    /export/software
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt
                           6.8T    42K       6.8T     1%    /sapmnt/PR1
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap
                           6.8T    43K       6.8T     1%    /usr/sap
    sapm7-h2-storIB:/export/sapm7/TRANS/trans
                           6.9T    42K       6.9T     1%    /usr/sap/trans
    root@em7pr1-ascs-02:~# df -h -F nfs
    Filesystem             Size   Used  Available Capacity  Mounted on
    sapm7-h2-storIB:/export/SOFTWARE/software
                           6.9T    22G       6.9T     1%    /export/software
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt
                           6.8T    42K       6.8T     1%    /sapmnt/PR1
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap
                           6.8T    43K       6.8T     1%    /usr/sap
    sapm7-h2-storIB:/export/sapm7/TRANS/trans
                           6.9T    42K       6.9T     1%    /usr/sap/trans

    Listing 9: Checking NFS mounts on both nodes.

     

In addition, check that the file systems to be managed by cluster resources have "no" in the mount-at-boot field of /etc/vfstab, so that the cluster (rather than the boot process) controls mounting:

     

    root@em7pr1-ascs-01:~# grep " no " /etc/vfstab
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt         -  /sapmnt/PR1       nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap        -  /usr/sap          nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3
    sapm7-h2-storIB:/export/sapm7/TRANS/trans        -  /usr/sap/trans    nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3
    root@em7pr1-ascs-02:~# grep " no " /etc/vfstab
    sapm7-h1-storIB:/export/sapm7/PR1/sapmnt         -  /sapmnt/PR1       nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3
    sapm7-h1-storIB:/export/sapm7/PR1/usr-sap        -  /usr/sap          nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3
    sapm7-h2-storIB:/export/sapm7/TRANS/trans        -  /usr/sap/trans    nfs  -  no   rw,bg,hard,rsize=32768,wsize=32768,proto=tcp,vers=3

    Listing 10: Checking NFS mount options on both nodes.

     

    Confirm that the corresponding projects have been added (in the previous zone cluster NAS procedure):

     

    root@em7pr1-ascs-01:~# clnas show -v -d all | grep Project
      Project:                                         supercluster1/local/PR1
      Project:                                         supercluster2/local/TRANS

    Listing 11: Checking projects for the zone clusters.

     

    Configuring a Highly Available Storage Resource Group

     

Using a highly available storage resource can improve the availability of SAP services. In an Oracle Solaris Cluster environment, the resource type ScalMountPoint enables access to highly available NFS file systems. (For more information, see "Configuring Failover and Scalable Data Services on Shared File Systems" in the Oracle Solaris Cluster documentation.)

     

    Figure 23 through Figure 31 show how to use the Oracle Solaris Cluster Manager browser interface to create a ScalMountPoint resource for the transport directory /usr/sap/trans. From the Tasks pane, select the Highly Available Storage wizard to begin creating the resource group and resources.

     

    f23.png

    Figure 23. Select the Highly Available Storage wizard from the Tasks menu.

     

    f24.png

    Figure 24. Review the prerequisites.

     

    f25.png

    Figure 25. Specify the zone cluster for the storage resource.

     

    f26.png

    Figure 26. All cluster zones are preselected as zones that can master the HA storage resource.

     

    f27.png

    Figure 27. Select "Shared File System" as the shared storage type, which will be a ScalMountPoint type.

     

     

When selecting file system mount points (Figure 28), make sure to select all three rows using Ctrl-left-click. Press the Return key to get to the next screen.

     

    f28.png

    Figure 28. Select the mount points and press Return. 

     

    Before creating the Oracle Solaris Cluster resources, it is recommended to shorten the resource names (listed in the resource name column) by editing each name.

     

    f29.png

    Figure 29. Review the settings for the ScalMountPoint resource.

     

     

    Navigate to the resource scal-usr-sap-trans-rs and check that there is an offline restart resource dependency set on the scal-usr-sap-rs resource. This is because there is a nested mount point: /usr/sap/trans is mounted after /usr/sap is mounted. Setting this dependency enforces that order (the wizard automatically creates this dependency because it detects this relationship).
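A sketch of how the same check, and the dependency itself, could be expressed with the CLI (resource names from this example; the wizard normally creates the dependency automatically):

# Show the configured properties and dependencies of the nested mount point resource.
clrs show -v scal-usr-sap-trans-rs

# If the dependency were missing, it could be set manually.
clrs set -p Resource_dependencies_offline_restart=scal-usr-sap-rs \
  scal-usr-sap-trans-rs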

     

    f30.png

    Figure 30. Check properties for scal-usr-sap-trans-rs.

     

    f31.png

    Figure 31. Verify the dependency for scal-usr-sap-trans-rs.

     

    Creating Oracle Solaris Projects

     

    Oracle Solaris projects can be created by following the instructions in SAP Note 724713 - Parameter Settings for Oracle Solaris 10 and above (access to SAP Notes requires logon and authentication to the SAP Marketplace). In the example installation, Oracle Solaris Zones were installed using Unified Archives. Thus, entries for the required projects were created during zone installation.

     

    Verify that the file /etc/project has an entry for project user.root and an entry for project PR1 (the SAP SID) in the zone where the SAP application servers will be installed:

     

# cat /etc/project
    system:0::::
    user.root:1::::process.max-file-descriptor=(basic,65536,deny);process.max-sem-nsems=
    (priv,4096,deny);project.max-sem-ids=(priv,2048,deny);project.max-shm-ids=
    (priv,2048,deny);project.max-shm-memory=(priv,18446744073709551615,deny)

    noproject:2::::
    default:3::::
    group.staff:10::::
    PR1:700:SAP System PR1:pr1adm::process.max-file-descriptor=(basic,65536,deny);
    process.max-sem-nsems=(priv,4096,deny);project.max-sem-ids=(priv,3072,deny);
    project.max-shm-ids=(priv,2048,deny);project.max-shm-memory=(priv,18446744073709551615,deny)

    Listing 12: Verifying project entries.

     

    If Unified Archives were not used to install the zone, add the project values to /etc/project manually. Project entries are not required for installing (A)SCS and ERS.
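If you need to create the PR1 project manually, the projadd command can be used instead of editing /etc/project directly; a sketch using the values from Listing 12 (adjust the project ID and resource controls to your system):

# Create the PR1 project for user pr1adm with the SAP-recommended resource controls.
projadd -p 700 -c "SAP System PR1" -U pr1adm \
  -K "process.max-file-descriptor=(basic,65536,deny)" \
  -K "process.max-sem-nsems=(priv,4096,deny)" \
  -K "project.max-sem-ids=(priv,3072,deny)" \
  -K "project.max-shm-ids=(priv,2048,deny)" \
  -K "project.max-shm-memory=(priv,18446744073709551615,deny)" \
  PR1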

     

    Verifying Zone Cluster Operation

     

This section provides steps to validate that the zone clusters operate properly when the status of a node changes between up and down.

     

    Validate the operation of the first zone cluster node. First check the initial state of the zone cluster resource groups and the sharenfs parameters of the PR1 and TRANS projects on the Oracle ZFS Storage Appliance:

     

    root@em7pr1-ascs-02:~# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Offline
     
    ers-rg         em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Offline
     
    scalmnt-rg     em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Online
     
    sapm7-h1-storadm:shares PR1> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:@192.168.28.153/32:
    @192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:@192.168.28.161/32,
    rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:@192.168.28.153/32:
    @192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:@192.168.28.161/32
     
    sapm7-h2-storadm:shares TRANS> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:@192.168.28.153/32:
    @192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:@192.168.28.161/32,
    rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:@192.168.28.153/32:
    @192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:@192.168.28.161/32

    Listing 13: Checking initial status.

     

    In a terminal window on node 2, observe syslog messages:

     

    root@sapm7adm-haapp-0201:~# tail -f /var/adm/messages | \
    egrep "Success|died|joined|RG_OFFLINE|RG_ONLINE"

    Listing 14: Monitoring syslog messages on node 2.

     

    Manually fail the first zone cluster node (Listing 15).

     

    root@em7pr1-ascs-01:~# date; uadmin 1 0
    Fri Mar 11 14:58:01 PST 2016
    uadmin: can't turn off auditd
     
    [Connection to zone 'pr1-ascs-zc' pts/2 closed]

    Listing 15: Forcing the failure of the ASCS zone cluster node.

     

As shown in the syslog output (Listing 16), both resource groups ascs-rg and ers-rg fail over to node 2. As a result, the status of these groups changes to RG_OFFLINE on node 1 and RG_ONLINE on node 2:

     

    Mar 11 14:58:36 sapm7adm-haapp-0201 {fence=Success., 0={message=Success., name=supercluster2/local/TRANS}}
    Mar 11 14:58:36 sapm7adm-haapp-0201 {fence=Success., 0={message=Success., name=supercluster1/local/PR1}}
    Mar 11 14:58:37 sapm7adm-haapp-0201 cl_runtime: [ID 848921 kern.notice] NOTICE: Membership: 
    Node 'em7pr1-ascs-01' (node id 2) of cluster 'pr1-ascs-zc' died.
    Mar 11 14:58:37 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] 
    resource group scalmnt-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 14:58:37 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] 
    resource group ers-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 14:58:37 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] 
    resource group ascs-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 14:58:38 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] 
    resource group ers-rg state on node em7pr1-ascs-02 change to RG_ONLINE
    Mar 11 14:58:39 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] 
    resource group ascs-rg state on node em7pr1-ascs-02 change to RG_ONLINE

    Listing 16: Observing resource group failover.

     

Check the NFS export status of the PR1 and TRANS projects. Note that the command must be entered twice because the first invocation returns the previously cached value. The entry ro=@192.168.28.152/32 means that the first zone cluster node has been fenced off from NFS write access (its IP address has been removed from the rw list).

     

    sapm7-h1-storadm:shares PR1> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32
    sapm7-h1-storadm:shares PR1> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,ro=@192.168.28.152/32,rw=@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32

    Listing 17: Observing NFS export status of PR1 and TRANS projects (appliance head 1).

     

    sapm7-h2-storadm:shares TRANS> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32
    sapm7-h2-storadm:shares TRANS> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,ro=@192.168.28.152/32,rw=@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32

    Listing 18: Observing NFS export status of PR1 and TRANS projects (appliance head 2).

     

    root@em7pr1-ascs-02:~# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    ers-rg         em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    scalmnt-rg     em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online

    Listing 19: Checking status.

     

Boot the first zone cluster node again:

     

    root@sapm7adm-haapp-0101:~# clzc boot -n sapm7adm-haapp-0101 pr1-ascs-zc
    Waiting for zone boot commands to complete on all the nodes of the zone cluster "pr1-ascs-zc"...

    Listing 20: Rebooting node 1.

     

    Check the syslog messages and recheck the NFS export status of the PR1 and TRANS projects. The IP address 192.168.28.152/32 should now appear in the rw list and is no longer in the ro list:

     

    Mar 11 15:02:46 sapm7adm-haapp-0201 {0={message=Success., name=supercluster2/local/TRANS}}
    Mar 11 15:02:46 sapm7adm-haapp-0201 {0={message=Success., name=supercluster1/local/PR1}}
    Mar 11 15:02:47 sapm7adm-haapp-0201 cl_runtime: [ID 564910 kern.notice] NOTICE: Membership: Node 'em7pr1-ascs-01' (node id 2) of cluster 'pr1-ascs-zc' joined.
    Mar 11 15:02:47 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] resource group scalmnt-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 15:02:47 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] resource group ers-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 15:02:47 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] resource group ascs-rg state on node em7pr1-ascs-01 change to RG_OFFLINE
    Mar 11 15:03:03 sapm7adm-haapp-0201 Cluster.RGM.pr1-ascs-zc.rgmd: [ID 529407 daemon.notice] resource group scalmnt-rg state on node em7pr1-ascs-01 change to RG_ONLINE

    sapm7-h1-storadm:shares PR1> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:\
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,ro=@192.168.28.152/32,rw=@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32
    sapm7-h1-storadm:shares PR1> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.152/32:@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32

    sapm7-h2-storadm:shares TRANS> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,ro=@192.168.28.152/32,rw=@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32
    sapm7-h2-storadm:shares TRANS> get sharenfs
                          sharenfs = sec=sys,root=@192.168.28.101/32:@192.168.28.102/32:@192.168.28.152/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32,rw=@192.168.28.152/32:@192.168.28.101/32:@192.168.28.102/32:
    @192.168.28.153/32:@192.168.28.154/32:@192.168.28.155/32:@192.168.28.160/32:
    @192.168.28.161/32

    root@em7pr1-ascs-02:~# clrg status

    === Cluster Resource Groups ===

    Group Name    Node Name          Suspended    Status
    ----------    ---------          ---------    ------
    ascs-rg        em7pr1-ascs-01    No            Offline
                  em7pr1-ascs-02    No            Online

    ers-rg        em7pr1-ascs-01    No            Offline
                  em7pr1-ascs-02    No            Online

    scalmnt-rg    em7pr1-ascs-01    No            Online
                  em7pr1-ascs-02    No            Online

    Listing 21: Rechecking status after rebooting node 1.

     

    Validate the operation of the second zone cluster node. Repeat the same sequence of verification steps, faulting the second zone cluster node and subsequently rebooting it.
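Assuming the same approach as in Listings 15 and 20, a sketch of the commands for the second node (hostnames follow this example):

# On the second zone cluster node: abruptly take the node down, as in Listing 15.
root@em7pr1-ascs-02:~# date; uadmin 1 0

# From the global zone on the second server: boot the zone cluster node again.
root@sapm7adm-haapp-0201:~# clzc boot -n sapm7adm-haapp-0201 pr1-ascs-zc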

     

    Installing SAP Components

     

At this point, the infrastructure is ready for the SAP software components to be installed. In the example deployment, the SAP software files are available at /net/sapm7-h1-storIB/export/sapt58/SOFTWARE/software. This directory is a share on head 1 of the local Oracle ZFS Storage Appliance (sapm7-h1-storIB) on the destination system, and it is a replica of the SOFTWARE share on the source system's appliance that was used to install the SAP software initially. Thus, all software downloaded and prepared on the source SPARC T5-8-based system is also available on the internal Oracle ZFS Storage Appliance in the destination Oracle SuperCluster M7 engineered system.

     

    f32.png

    Figure 32. SOFTWARE project on the appliance.

     

    The SAP Software Provisioning Manager (sapinst) can use the logical hostname specified by the SAPINST_USE_HOSTNAME parameter to install components on the destination system. To take advantage of higher bandwidth offered by the InfiniBand network within the Oracle SuperCluster M7, the installation uses the InfiniBand (IB) connections.

     

    Step 1. Prepare /etc/hosts with InfiniBand IP addresses.

     

    Add the same IB IP address information to the /etc/hosts file on all zones:

     

    192.168.28.152  im7pr1-ascs-01
    192.168.28.153  im7pr1-ascs-02
    192.168.28.154  im7pr1-haapps-01
    192.168.28.155  im7pr1-haapps-02
    192.168.28.156  im7pp1-scs-01
    192.168.28.157  im7pp1-scs-02
    192.168.28.158  im7pp1-haapps-01
    192.168.28.159  im7pp1-haapps-02
    192.168.28.160  im7pr1-aas-01
    192.168.28.161  im7pr1-aas-02
    192.168.28.162  im7pp1-aas-01
    192.168.28.163  im7pp1-aas-02
    192.168.28.164  im7pr1-lh.us.oracle.com im7pr1-lh im7pr1-ascs-lh
    192.168.28.165  im7pr1-ers-lh.us.oracle.com im7pr1-ers-lh
    192.168.28.166  im7pr1-pas-lh.us.oracle.com im7pr1-pas-lh
    192.168.28.167  im7pr1-aas01-lh
    192.168.28.168  im7pr1-aas02-lh
    192.168.28.169  im7pp1-scs-lh
    192.168.28.170  im7pp1-ers-lh
    192.168.28.171  im7pp1-pas-lh
    192.168.28.172  im7pp1-aas01-lh
    192.168.28.173  im7pp1-aas02-lh
192.168.28.1    sapm7-h1-storIB
192.168.28.2    sapm7-h2-storIB

    Listing 22: Adding IB IP addresses to /etc/hosts on all zones.

     

    Notice that only the logical hostnames have fully qualified names in the definition. This is required by sapinst. The hostname for ASCS was shortened to im7pr1-lh (instead of im7pr1-ascs-lh, which is one character too long).
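Before starting sapinst, it is worth confirming that each logical hostname resolves as expected on the zone where the instance will be installed; a quick sketch:

# Verify resolution of the ASCS logical hostname (should return the IB address
# and the fully qualified name from /etc/hosts).
getent hosts im7pr1-lh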

     

    Step 2. Confirm that the zone cluster resources are online.

     

    ASCS and ERS are installed in one of the zones of the pr1-ascs-zc zone cluster and PAS is installed in one of the zones of the pr1-haapps-zc zone cluster. Before starting the installation, verify that all related resources are online, making sure that:

     

    • The logical hostname is running on the node being installed
    • The scalmnt resources are online

     

Checking the PAS zone cluster resources (Listing 23) shows that the resource group pas-rg is active on node 1. This means we should perform the PAS installation on node 1.

     

    root@em7pr1-haapps-01:/export/software/INST/PR1notes# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name           Suspended    Status
    ----------     ---------           ---------    ------
    pas-rg         em7pr1-haapps-01    No           Online
                   em7pr1-haapps-02    No           Offline
     
    scalmnt-rg     em7pr1-haapps-01    No           Online
                   em7pr1-haapps-02    No           Online

    Listing 23: Checking which zone cluster node is online for the PAS resource group.

     

    Note that a clrg switch command could switch pas-rg to node 1 if necessary:

     

    # clrg switch -n em7pr1-haapps-01 pas-rg

    Listing 24: Switching the PAS resource group to node 1.

     

    Step 3. Install Oracle Solaris packages to run the sapinst installer.

     

The sapinst installer can be used with its GUI running locally, or with the GUI running on a remote server and connecting to the sapinst server process on the current host.

     

By default, Oracle Solaris is initially installed with the minimum set of required packages. Because the X11 packages are not included in a standard minimized Oracle Solaris installation, it is not possible (by default) to display an X Window System client application (such as the graphical sapinst client) remotely on another host. To be able to run the sapinst client locally to install the SAP instances, either install the Oracle Solaris desktop package group (if it has not already been installed) or add the individual packages listed below (a sample command follows the list):

     

    • xauth—required to allow ssh to set up X11 forwarding with authentication
    • x11/diagnostic/x11-info-clients—required for software that executes xdpyinfo
    • library/motif—required for software that has a Motif GUI
    • terminal/xterm—required for testing or to use xterm
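A sketch of adding these packages with the Image Packaging System and connecting with X11 forwarding (package FMRIs can vary slightly between Oracle Solaris releases and repositories):

# Install the X11 pieces needed to run the graphical sapinst client over ssh.
pkg install xauth x11/diagnostic/x11-info-clients library/motif terminal/xterm

# Connect to the zone with X11 forwarding enabled before starting sapinst.
ssh -X root@em7pr1-ascs-01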

     

    Step 4. Start the sapinst installer to install ASCS.

     

    To install ASCS, create a temporary installation directory, change the current directory to the newly created directory, and run the command in Listing 25.

     

    # /export/software/t58software/sap/SWPM/sapinst SAPINST_USE_HOSTNAME=im7pr1-lh

    Listing 25: Starting the sapinst installer to install ASCS.

     

    As shown in Figure 33, we picked the System Copy option to install ASCS.

     

    f33.png

    Figure 33. Select ASCS Instance from the SAP installation screen.

     

Table 2 shows the settings for installing ASCS for system PR1 in the example deployment. A file named summary.html, created in the installation directory before the actual installation starts, captures these settings and shows all user input from the sapinst screens. For more detailed information on how to install SAP on Oracle SuperCluster, see the Oracle Optimized Solution for SAP page on the Oracle Technology Network.

     

    Table 2. Settings for ASCS Installation                              

    Dialog "General SAP System Parameters"
    SAP System ID (SAPSID)PR1
    SAP Mount Directory/sapmnt
    Unicode System (Recommended)selected
    Dialog "DNS Domain Name"
    Set FQDN for SAP systemselected
    Set FQDN for SAP systemselected
    Dialog "Media Browser"
    Dialog "Master Password"
    Password for All Users******
    Dialog "ASCS Instance"
    ASCS Instance Number00
    ASCS Instance Virtual Hostim7pr1-lh
    Dialog "ABAP Message Server Ports"
    ABAP Message Server Port3600
    Internal ABAP Message Server Port3900
    Dialog "Unpack Archives"
    CodepageDestinationDownloaded ToArchiveUnpack
    Unicode/usr/sap/PR1/SYS/exe/uc/sun_64DBINDEP/SAPEXE.SARchecked

     

    Step 5. Start the sapinst installer to install ERS.

     

    To install ERS, create a temporary directory, change the current directory to the newly created directory, and run the command in Listing 26.

     

    # /export/software/t58software/sap/SWPM/sapinst SAPINST_USE_HOSTNAME=im7pr1-ers-lh

    Listing 26: Starting the sapinst installer to install ERS.

     

    Table 3 shows the settings for ERS installation.

     

    Table 3. Settings for ERS Installation                              

    Dialog "General SAP System Parameters"
    Profile Directory/usr/sap/PR1/SYS/profile
    Dialog "Enqueue Replication Server Instance"
    Instance NameInstance HostInstall ERS InstanceSAP System ID
    ASCS00im7pr1-lhcheckedPR1
    Dialog "Media Browser"
    Package LocationCheck LocationMedium
    /net/sapm7-h1-storIB/export/sapt58/SOFTWARE/software/sap/ERP_6.07/Kernel_7.42/DATA_UNITS/K_742_U_SOLARIS_SPARCcheckedUC Kernel NW740 SR2
    Dialog "Enqueue Replication Server Instance"
    Name of the (A)SCS Instance to be ReplicatedASCS00
    Number of the (A)SCS Instance to be Replicated00
    Number of the ERS Instance10
    ERS Instance - Virtual Host Nameim7pr1-ers-lh
    Dialog "Activate Changes"
    (A)SCS Instance NameASCS00
    (A)SCS Instance Hostim7pr1-lh
    Automatic Instance and Service Restartselected

     

    Step 6. Start the sapinst installer to install PAS.

     

    To install PAS, create a temporary directory, change the current directory to the newly created directory, and run the command in Listing 27.

     

    # /export/software/t58software/sap/SWPM/sapinst SAPINST_USE_HOSTNAME=im7pr1-pas-lh

    Listing 27: Starting the sapinst installer to install PAS.

     

    Table 4 shows the settings for PAS installation.

     

    Table 4. Settings for PAS Installation                                                                                                                                                  

    Dialog "General SAP System Parameters"
    Profile Directory/sapmnt/PR1/profile
    Dialog "Master Password"
    Password for All Users******
    Dialog "Media Browser"
    Package LocationMedium
    /net/sapm7-h1-storIB/export/sapt58/SOFTWARE/software/sap/ERP_6.07/Kernel_7.42/DATA_UNITS/K_742_U_SOLARIS_SPARCUC Kernel NW740 SR2
    Dialog "SAP System Database"
    Database ID (DBSID)PR3
    Database Hostsapm7zdb1c1-ib-vip
    Database on Oracle Real Application Clusters (Oracle RAC)deselected
    Dialog "Oracle Network Client Configuration"
    ABAP SchemaSAPSR3
    DB Client Version121
    ListenerLISTENER
    Listener Port1521
    DomainWORLD
    Keep listener.oraselected
    Keep tnsnames.oraselected
    Dialog "Primary Application Server Instance"
    PAS Instance Number00
    PAS Instance Virtual Hostim7pr1-pas-lh
    Dialog "ABAP Message Server Ports"
    ABAP Message Server Port3600
    Internal ABAP Message Server Port3900
    Dialog "ICM User Management"
    Password of 'webadm'******
    Dialog "SLD Destination for the SAP System OS Level"
    selected
    Use HTTPSdeselected
    SLD Host
    SLD HTTP(S) Port
    SLD Data Supplier User
    Password of SLD Data Supplier User******
    Dialog "Message Server Access Control List"
    selected
    Dialog "Actions Before SAP System Start"
    Interrupt before starting the SAP systemdeselected
    Dialog "Depooling Option"
    Execute ABAP reports for depoolingdeselected
    Dialog "SAP System DDIC Users"
    Password of DDIC in Client 000 in the Source System******
    Dialog "Secure Storage Key Generation"
    selected
    Dialog "Media Browser"
    Package LocationMedium
    net/sapm7-h1-storIB/export/sapt58/SOFTWARE/software/sap/Oracle_Client_12.1.0.2/OCL_SOLARIS_SPARCOracle Client 121
    Dialog "Unpack Archives"
    CodepageDestinationDownloaded ToArchiveUnpack
    Unicode/usr/sap/PR1/SYS/exe/uc/sun_64DBINDEP/SAPEXE.SARunchecked
    Unicode/usr/sap/PR1/SYS/exe/uc/sun_64ORA/SAPEXEDB.SARchecked
    Unicode/usr/sap/PR1/SYS/exe/uc/sun_64DBINDEP/IGSEXE.SARchecked
    Unicode/usr/sap/PR1/DVEBMGS00DBINDEP/IGSHELPER.SARchecked
    /oracle/client/12xOCL12164.SARchecked
    Dialog "Install Diagnostics Agent"
    Install Diagnostics Agentdeselected

     

    After completing these steps, the SAP components are installed and running. The ASCS and ERS servers are running on the same node and have different SAP instance numbers: 00 and 10.
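
     

    To confirm that both instances are up under their respective instance numbers, sapcontrol can be queried for each one. The following is a minimal sketch, run as user pr1adm (sapcontrol ships with the SAP kernel; GetProcessList is its standard process query):

    $ sapcontrol -nr 00 -function GetProcessList    # ASCS00: message server and enqueue server
    $ sapcontrol -nr 10 -function GetProcessList    # ERS10: enqueue replication server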

     

    Testing SAP Startup and Database Connectivity

     

    The SAP installation was performed assuming a single-instance Oracle Database. In this case, all SAP application servers point to database instance PR3 running on sapm7zdb1c1-ib-vip, the first database (DB) domain. If the single-instance database is converted to an Oracle RAC instance before the SAP component installation, sapinst asks for additional information: the SCAN name and the names of the Oracle RAC nodes. A service is created, and you are instructed to run a script (Listing 28), generated by sapinst for each application server, on the DB domain.

     

    Note: For step-by-step instructions on how to convert a single-instance SAP database to an Oracle RAC implementation, see the article "Converting Single-Instance Oracle Databases for SAP to Oracle RAC".

     

    Step 1. Run the generated script for the database instance.

     

    #!/bin/sh
    #Generated shell script to create Oracle RAC services on database host.
    #Login as the owner of the oracle database software (typically as user 'oracle') on the database host.
    #Set the $ORACLE_HOME variable to the home location of the database.
    #
    $ORACLE_HOME/bin/srvctl add service  -db PR3 -service PR3_DVEBMGS00 -preferred PR3001 \
    -available PR3002 -tafpolicy BASIC -policy AUTOMATIC  -notification TRUE  \
    -failovertype SELECT  -failovermethod BASIC  -failoverretry 3  -failoverdelay 5
    $ORACLE_HOME/bin/srvctl start service  -db PR3 -service PR3_DVEBMGS00

    Listing 28: Generated script to create Oracle RAC services on database host.

     

    Step 2. Update environment variables.

     

    Because the zones were created using Unified Archives and the installation was done in only one of the zones, the dot files in the pr1adm home directory on the second zone have incorrectly set environment variables. Listing 29 shows the files that need to be updated:

     

    .bashrc
    .cshrc
    .dbenv.csh
    .dbenv.sh
    .dbenv_em7pr1-haapps-01.csh
    .dbenv_em7pr1-haapps-01.sh
    .dbenv_epr1-haapps-01.csh
    .dbenv_epr1-haapps-01.sh
    .login
    .profile
    .sapenv.csh
    .sapenv.sh
    .sapenv_em7pr1-haapps-01.csh
    .sapenv_em7pr1-haapps-01.sh
    .sapenv_epr1-haapps-01.csh
    .sapenv_epr1-haapps-01.sh
    .sapsrc.csh
    .sapsrc.sh
    .sapsrc_em7pr1-haapps-01.csh
    .sapsrc_em7pr1-haapps-01.sh
    .sapsrc_epr1-haapps-01.csh
    .sapsrc_epr1-haapps-01.sh

    Listing 29: Files to be updated with environment variables.

     

    In the example deployment, it is necessary to rename the files to reflect the new hostnames and to change the database SID from PR1 to PR3. (The example SAP installation on Oracle SuperCluster M7 uses PR3 as the database SID.)
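
     

    The following is a minimal sketch of the renaming, assuming the dot files reside in the home directory of user pr1adm and that epr1-haapps-01 and em7pr1-haapps-01 are the old and new hostnames from Listing 29. Only database-related entries such as dbs_ora_tnsname change from PR1 to PR3; the SAP system ID remains PR1, so do not replace PR1 globally:

    # su - pr1adm
    $ cd ~
    $ # rename each host-specific file to carry the new hostname
    $ for f in .*_epr1-haapps-01.*; do
    >   mv "$f" "$(echo "$f" | sed 's/epr1-haapps-01/em7pr1-haapps-01/')"
    > done
    $ # inspect the SID references before editing the database entries to PR3
    $ grep -n PR1 .dbenv_em7pr1-haapps-01.sh .dbenv_em7pr1-haapps-01.csh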

     

    To develop the entire series of articles in which we tested multiple migration methods ("Best Practices for Migrating SAP Systems to Oracle Infrastructure"), we used three different database SID values:

     

    • PR2: SID used to test the Oracle RMAN DUPLICATE method (that migrates from an active database)
    • PR3: SID used to test the Transportable Tablespaces method
    • PR4: SID used to test the Oracle RMAN Cross-Platform BACKUP and RESTORE method

     

    We installed SAP pointing to the PR3 database. To test the three databases created by the different migration methods (PR2, PR3, and PR4), we changed only the tnsnames.ora file in /usr/sap/PR1/SYS/profile/oracle and kept dbs_ora_tnsname set to PR3. To connect to PR2, for example, change the value of SERVICE_NAME in Listing 30 from PR3 to PR2:

     

    PR3.WORLD=
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ENABLE=broken)
          (FAILOVER=on)
          (load_balance=off)
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = sapm7zdb1c1-ib-vip)
            (PORT = 1521)
          )
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = sapm7zdb2c1-ib-vip)
            (PORT = 1521)
          )
        )
        (CONNECT_DATA =
            (SERVICE_NAME = PR3)        => change to PR2
            (FAILOVER_MODE = (TYPE=SELECT)(METHOD=BASIC))
        )
      )

    Listing 30: Changing tnsnames.ora for different SIDs. 

     

    Step 3. Test SAP startup on all zone clusters for the database SID.

     

    Start the SAP instance on each node of the corresponding zone cluster. Use the following sequence to test SAP startup for ASCS, ERS, and PAS; run the startsap and stopsap commands as user pr1adm, and run clrg as user root (a worked example follows the list):

     

    • stopsap—Run on the current node.
    • clrg switch -n <node2> <resource_group>—Migrate the resource group containing the logical hostname to the new node.
    • startsap—Run on node2.
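
     

    For example, a failover test of the ASCS instance might look like the following sketch. The node names, the ascs-rg resource group, and the im7pr1-lh virtual host follow the example deployment, and the startsap and stopsap arguments use the usual <task> <instance> <virtual hostname> convention:

    pr1adm@em7pr1-ascs-01 $ stopsap r3 ASCS00 im7pr1-lh
    root@em7pr1-ascs-01   # clrg switch -n em7pr1-ascs-02 ascs-rg
    pr1adm@em7pr1-ascs-02 $ startsap r3 ASCS00 im7pr1-lh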

     

    Setting Up Zone Cluster Resources for ABAP Stack Instances

     

    This section describes the steps to put the ABAP stack instances (ASCS, ERS, and PAS servers) under Oracle Solaris Cluster management.

     

    Step 1. Create a local directory for hostctrl.

     

    The directory /usr/sap is shared between all zones in a zone cluster, but the directory /usr/sap/hostctrl needs to be local to each zone. Because we migrated the entire zone from the source system, /usr/local/sap already exists. The commands in Listing 31 and Listing 32 create a local version of /usr/sap/hostctrl on both nodes. (The symbolic link created in the shared /usr/sap on node 1 is also visible on node 2, so node 2 needs only the local copy.)

     

    # cd /usr/local
    # mv sap sap.t58
    # mkdir sap
    # cp -r -p /usr/sap/hostctrl /usr/local/sap/
    # cd /usr/sap
    # mv hostctrl hostctrl.orig
    # ln -s /usr/local/sap/hostctrl /usr/sap/hostctrl

    Listing 31: Creating a local /usr/sap/hostctrl on node 1.

     

    # cd /usr/local
    # mv sap sap.t58
    # mkdir sap
    # cp -r -p /usr/sap/hostctrl.orig /usr/local/sap/hostctrl

    Listing 32: Creating a local /usr/sap/hostctrl on node 2.

     

    Step 2. Rename S90sapinit in /etc/rc3.d so that SAP is no longer started automatically at boot, outside of cluster control (the host prompt is not shown; perform this step on both nodes).

     

    # cd /etc/rc3.d
    # ls
    README          S90sapinit      S90sapinit.old
    # mv S90sapinit S90sapinit.old

    Listing 33: Renaming S90sapinit in /etc/rc3.d.

     

    Step 3. Modify the start profile.

     

    Modify the start profile and/or instance profile to ensure that the SAP message server is restarted by sapstartsrv but that the enqueue server is only started, never restarted, by it. After a failure, the enqueue server must not be restarted in place; Oracle Solaris Cluster fails it over to the node running the ERS instance so that the replicated lock table is preserved.

     

    # su - pr1adm
    # cdpro
    # vi PR1_ASCS00_im7pr1-lh
    #-----------------------------------------------------------------------
    # Start SAP message server
    #-----------------------------------------------------------------------
    _MS = ms.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
    Execute_02 = local rm -f $(_MS)
    Execute_03 = local ln -s -f $(DIR_EXECUTABLE)/msg_server$(FT_EXE) $(_MS)
    Restart_Program_00 = local $(_MS) pf=$(_PF)
    #-----------------------------------------------------------------------
    # Start SAP enqueue server
    #-----------------------------------------------------------------------
    _EN = en.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
    Execute_04 = local rm -f $(_EN)
    Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enserver$(FT_EXE) $(_EN)
    #Restart_Program_01 = local $(_EN) pf=$(_PF)
    Start_Program_01 = local $(_EN) pf=$(_PF)

    Listing 34: Editing the instance profile.

     

    Step 4. Register SAP-specific resource types.

     

    SAP-specific agents are implemented as resource types in Oracle Solaris Cluster and are supplied with the Oracle Solaris Cluster software. The resource types are made available during the installation process, but they must be registered before use. Register them as needed, in each zone cluster or in the global zone of each node:

     

    root@em7pr1-haapps-01:~# clrt list
    SUNW.LogicalHostname:5
    SUNW.SharedAddress:3
    SUNW.ScalMountPoint:4
    ORCL.oracle_external_proxy
    root@em7pr1-haapps-01:~# clrt register ORCL.sapstartsrv
    root@em7pr1-haapps-01:~# clrt register ORCL.sapcentr
    root@em7pr1-haapps-01:~# clrt register ORCL.saprepenq
    root@em7pr1-haapps-01:~# clrt register ORCL.saprepenq_preempt
    root@em7pr1-haapps-01:~# clrt list
    SUNW.LogicalHostname:5
    SUNW.SharedAddress:3
    SUNW.ScalMountPoint:4
    ORCL.oracle_external_proxy
    ORCL.sapstartsrv:2
    ORCL.sapcentr:2
    ORCL.saprepenq:2
    ORCL.saprepenq_preempt:2

    Listing 35: Registering SAP-specific resource types with Oracle Solaris Cluster.

     

    Step 5. Put the ASCS and ERS servers under Oracle Solaris Cluster management.

     

    The commands in Listing 36 create the resources and affinities needed to manage the SAP ASCS and ERS instances with Oracle Solaris Cluster (the commands can be run from any of the zone cluster nodes). Notice that each resource is created in the resource group that was created earlier for the corresponding logical hostname (ascs-rg or ers-rg).

     

    #ASCS resources
    # clrs create -d -g ascs-rg -t ORCL.sapstartsrv \
      -p SID=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=00 \
      -p instance_name=ASCS00 \
      -p host=im7pr1-lh \
      -p child_mon_level=5 \
      -p resource_dependencies_offline_restart=scal-usr-sap-rs,scal-sapmnt-PR1-rs \
      -p timeout_return=20 \
      ascs-startsrv-rs
     
    # clrs create -d -g ascs-rg -t ORCL.sapcentr \
      -p SID=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=00 \
      -p instance_name=ASCS00 \
      -p host=im7pr1-lh \
      -p retry_count=0 \
      -p resource_dependencies=ascs-startsrv-rs \
      -p resource_dependencies_offline_restart=scal-usr-sap-rs,scal-sapmnt-PR1-rs \
      -p yellow=20 \
      ascs-rs
     
    #ERS resources
    # clrs create -d -g ers-rg -t ORCL.sapstartsrv \
      -p SID=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=10 \
      -p instance_name=ERS10 \
      -p host=im7pr1-ers-lh \
      -p child_mon_level=5 \
      -p resource_dependencies_offline_restart=scal-usr-sap-rs,scal-sapmnt-PR1-rs \
      -p timeout_return=20 \
      ers-startsrv-rs
     
    # clrs create -d -g ers-rg -t ORCL.saprepenq \
      -p sid=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=10 \
      -p instance_name=ERS10 \
      -p host=im7pr1-ers-lh \
      -p resource_dependencies=ers-startsrv-rs \
      -p resource_dependencies_offline_restart=scal-usr-sap-rs,scal-sapmnt-PR1-rs \
      -p yellow=20 \
      ers-rs
     
    # clrs create -d -g ers-rg -t ORCL.saprepenq_preempt \
      -p sid=PR1 \
      -p sap_user=pr1adm \
      -p repenqres=ers-rs \
      -p enq_instnr=00 \
      -p resource_dependencies_offline_restart=ascs-rs \
      preempter-rs
     
    #Weak positive affinity: ascs-rg prefers the node where ers-rg runs
    # clrg set -p RG_affinities=+ers-rg ascs-rg
    # clrg show -p RG_affinities ascs-rg
     
    #Strong positive affinity to the storage resource group
    # clrg set -p RG_affinities+=++scalmnt-rg ascs-rg
    # clrg show -p RG_affinities ascs-rg
    # clrg set -p RG_affinities+=++scalmnt-rg ers-rg
    # clrg show -p RG_affinities ers-rg
     
    #Ping-pong interval of 10 minutes for testing
    # clrg set -p pingpong_interval=600 ascs-rg
    # clrg set -p pingpong_interval=600 ers-rg
    # clrs enable +

    Listing 36: Configuring ASCS and ERS resource groups to be managed by Oracle Solaris Cluster.

     

    After configuring the ASCS and ERS resource groups, check the status. Running clrg status repeatedly shows both groups coming online on one node and the preempter resource then moving ers-rg to the other node, away from ascs-rg:

     

    root@em7pr1-ascs-01:/usr/sap# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Pending_online
     
    ers-rg         em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Pending_online
     
    scalmnt-rg     em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Online
     
    root@em7pr1-ascs-01:/usr/sap# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    ers-rg         em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    scalmnt-rg     em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Online
     
    root@em7pr1-ascs-01:/usr/sap# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    ers-rg         em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Pending_offline
     
    scalmnt-rg     em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Online
     
    root@em7pr1-ascs-01:/usr/sap# clrg status
     
    === Cluster Resource Groups ===
     
    Group Name     Node Name          Suspended     Status
    ----------     ---------          ---------     ------
    ascs-rg        em7pr1-ascs-01     No            Offline
                   em7pr1-ascs-02     No            Online
     
    ers-rg         em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Offline
     
    scalmnt-rg     em7pr1-ascs-01     No            Online
                   em7pr1-ascs-02     No            Online

    Listing 37: Checking the status after configuration.

     

    Configuring Oracle Solaris Cluster HA for External Proxy

     

    Oracle Solaris Cluster provides the HA for Oracle External Proxy resource type. This resource type probes the Oracle Database or Oracle RAC service and reflects the availability of that service within the Oracle Solaris Cluster configuration. It can monitor both Oracle RAC databases and single-instance databases. To meet HA requirements, it is recommended to convert single-instance databases to Oracle RAC by following the instructions in the article "Converting Single-Instance Oracle Databases for SAP to Oracle RAC," which covers the setup of an Oracle External Proxy resource for both a single-instance database and Oracle RAC. In both cases, the assumption is that the database runs on multiple nodes, either because a single-instance database is manually switched from one node to another, or because an Oracle RAC database has instances on different nodes. There are five key steps for creating an Oracle External Proxy resource:

     

    • Create the remote database user.
    • Set up the secure remote database password.
    • Create the tnsnames.ora file.
    • Configure the remote Oracle Notification Service.
    • Create the Oracle External Proxy resource.

     

    After creating the Oracle External Proxy resource, a few additional steps (such as putting the PAS server under Oracle Solaris Cluster management, defining resource dependencies, and editing the SAP profile) are required to finalize the configuration.

     

    Step 1. Create the remote database user.

    The current database is a copy of a database that was already configured for use with Oracle Solaris Cluster, so the user hauser already exists. The requirements for hauser may evolve with newer versions of Oracle Solaris Cluster, so it is good practice to check the Oracle Solaris Cluster documentation for the installed version to ensure that this user is created correctly.

     

    This is an example of how to create user hauser:

     

    oracle@sapm7zdbadm1c1:~$ sqlplus / as sysdba
    SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 9 02:19:09 2017
    Copyright (c) 1982, 2014, Oracle.  All rights reserved.
     
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Advanced Analytics and Real Application Testing options
     
    SQL> create user hauser identified by hauser;
    SQL> grant create session to hauser;
    SQL> grant execute on dbms_lock to hauser;
    SQL> grant select on v_$instance to hauser;
    SQL> grant select on v_$sysstat to hauser;
    SQL> grant select on v_$database to hauser;
    SQL> create profile hauser limit PASSWORD_LIFE_TIME UNLIMITED;
    SQL> alter user hauser identified by hauser profile hauser;
    SQL> exit

    Listing 38: Creating the user hauser.

     

    Step 2. Set up the secure remote database password.

     

    In each zone of the zone cluster, run the following commands to create an encrypted file containing the database password for user hauser:

     

    root@em7pr1-haapps-01:~# dd if=/dev/urandom of=/var/cluster/scoep_key bs=8 count=1
    1+0 records in
    1+0 records out
    root@em7pr1-haapps-01:~# echo hauser | /usr/sfw/bin/openssl enc -aes128 -e -pass \
      > file:/var/cluster/scoep_key -out /opt/ORCLscoep/.oep-rs_passwd

    Listing 39: Creating the secure password for user hauser.

     

    Verify that the password can be decrypted.

     

    root@em7pr1-haapps-01:~# /usr/sfw/bin/openssl enc -aes128 -d -pass \
      >  file:/var/cluster/scoep_key -in /opt/ORCLscoep/.oep-rs_passwd
    hauser
    root@em7pr1-haapps-01:~# chmod 400 /var/cluster/scoep_key
    root@em7pr1-haapps-01:~# chmod 400 /opt/ORCLscoep/.oep-rs_passwd

    Listing 40: Verifying the password for user hauser.

     

    Step 3. Create the file tnsnames.ora.

     

    Create a tnsnames.ora file in /var/opt/oracle on each of the zone cluster nodes. Verify the content on each node of the pr1-haapps-zc zone cluster:

     

    # cat /var/opt/oracle/tnsnames.ora
    PR3 =
      (DESCRIPTION =
        (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = sapm7zdb1c1-ib-vip) (PORT = 1521))
            (ADDRESS = (PROTOCOL = TCP)(HOST = sapm7zdb2c1-ib-vip) (PORT = 1521))
         )
        (CONNECT_DATA =
           (SERVER = DEDICATED) (SERVICE_NAME = PR3)
        )
      )

    Listing 41: Verifying the content of file /var/opt/oracle/tnsnames.ora on each node of the pr1-haapps-zc zone cluster.

     

    In the example deployment, three different migration methods were used to create three different databases (PR2, PR3, and PR4). Changing the value of SERVICE_NAME allows pointing the resource to a different database as needed. The rest of the configuration does not change.
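
     

    Because the proxy resource connects through this tnsnames.ora entry, repointing it is a one-line edit on each node of the zone cluster. The following is a minimal sketch, assuming GNU sed is available at /usr/gnu/bin/sed (as it typically is on Oracle Solaris 11); the .bak suffix keeps a backup of the previous file:

    # /usr/gnu/bin/sed -i.bak 's/(SERVICE_NAME = PR3)/(SERVICE_NAME = PR2)/' \
      /var/opt/oracle/tnsnames.ora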

     

    Step 4. Configure the remote Oracle Notification Service.

     

    Running the Oracle Notification Service on every database node reduces the time it takes for the ORCL.oracle_external_proxy resource type to connect to the database and determine the database state. To verify that the Oracle Notification Service is running on the database nodes, run the following command:

     

    oracle@sapm7zdbadm1c1:/oracle/PR2/121/dbs$ crsctl stat res ora.ons -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details      
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ons
                   ONLINE  ONLINE       sapm7zdbadm1c1           STABLE
                   ONLINE  ONLINE       sapm7zdbadm2c1           STABLE
    --------------------------------------------------------------------------------

    Listing 42: Verifying the Oracle Notification Service.

     

    Step 5. Create the Oracle External Proxy resource.

     

    Set up the Oracle External Proxy resource as shown in Listing 43. Note that the location of the tnsnames.ora file can be specified as a parameter to allow a different location to be chosen (such as /usr/sap/PR1/SYS/profile/oracle):

     

    root@em7pr1-haapps-01:~# clrt register -f /opt/ORCLscoep/etc/ORCL.oracle_external_proxy \
      ORCL.oracle_external_proxy
    root@em7pr1-haapps-01:~# clrg create -S oep-rg
    root@em7pr1-haapps-01:~# clrs create -g oep-rg \
      -t ORCL.oracle_external_proxy \
      -p service_name=PR3 \
      -p ons_nodes=sapm7zdb1c1-ib-vip:6200,sapm7zdb2c1-ib-vip:6200 \
      -p dbuser=hauser \
      -p tns_admin=/var/opt/oracle \
      -d oep-rs
    root@em7pr1-haapps-01:~# clrs status oep-rs
     
    === Cluster Resources ===
     
    Resource Name    Node Name           State      Status Message
    -------------    ---------           -----      --------------
    oep-rs           em7pr1-haapps-02    Offline    Offline
                     em7pr1-haapps-01    Offline    Offline
     
     
    root@em7pr1-haapps-01:~# clrg online -eM oep-rg
    root@em7pr1-haapps-01:~# clrs status oep-rs
     
    === Cluster Resources ===
     
    Resource Name     Node Name           State     Status Message
    -------------     ---------           -----     --------------
    oep-rs            em7pr1-haapps-02    Online    Online - Service PR3 is UP [Instance is OPEN]
                      em7pr1-haapps-01    Online    Online - Service PR3 is UP [Instance is OPEN]

    Listing 43: Configuring the Oracle External Proxy resource.

     

    Step 6. Put the PAS server under Oracle Solaris Cluster management.

     

    The SAP Primary Application Server (PAS) connects to Oracle Database. Check the resource types:

     

    root@em7pr1-haapps-01:~# clrt list
    SUNW.LogicalHostname:5
    SUNW.SharedAddress:3
    SUNW.ScalMountPoint:4
    ORCL.oracle_external_proxy
    ORCL.sapstartsrv:2
    ORCL.sapcentr:2
    ORCL.saprepenq:2
    ORCL.saprepenq_preempt:2

    Listing 44: Listing the resource types available.

     

    Register the ORCL.sapdia resource type and create the resources that put the PAS instance under cluster control. The Oracle External Proxy resource created earlier (oep-rs), which can monitor either a single-instance Oracle Database or an Oracle RAC database, is used as a dependency so that the PAS instance starts only when the database service is available (the commands can be run from any of the zone cluster nodes):

     

    root@em7pr1-haapps-01:~# clrt register ORCL.sapdia
    root@em7pr1-haapps-01:~# clrs create -d -g pas-rg -t ORCL.sapstartsrv \
      -p SID=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=00 \
      -p instance_name=DVEBMGS00 \
      -p host=im7pr1-pas-lh \
      -p child_mon_level=5 \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs \
      -p timeout_return=20 \
      pas-startsrv-rs
     
    ## PAS was installed using IB host im7pr1-pas-lh
    root@em7pr1-haapps-01:~# clrs create -d -g pas-rg -t ORCL.sapdia \
      -p SID=PR1 \
      -p sap_user=pr1adm \
      -p instance_number=00 \
      -p instance_name=DVEBMGS00 \
      -p host=im7pr1-pas-lh \
      -p resource_dependencies=pas-startsrv-rs,scal-ORACLE-oracle-rs,oep-rs \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs \
      -p yellow=20 \
      pas-rs

    Listing 45: Configuring the PAS server for Oracle Solaris Cluster management.

     

    After configuring the PAS server, check the status of cluster resources:

     

    root@em7pr1-haapps-01:~# clrs status
     
    === Cluster Resources ===
     
    Resource Name                Node Name          State     Status Message
    -------------                ---------          -----     --------------
    pas-rs                       em7pr1-haapps-01   Offline   Offline
                                 em7pr1-haapps-02   Offline   Offline
     
    pas-startsrv-rs              em7pr1-haapps-01   Offline   Offline
                                 em7pr1-haapps-02   Offline   Offline
     
    im7pr1-pas-lh                em7pr1-haapps-01   Online    Online - LogicalHostname online.
                                 em7pr1-haapps-02   Offline   Offline - LogicalHostname offline.
     
    em7pr1-pas-lh                em7pr1-haapps-01   Online    Online - LogicalHostname online.
                                 em7pr1-haapps-02   Offline   Offline - LogicalHostname offline.
     
    scal-TRANS-trans-rs          em7pr1-haapps-01   Online    Online
                                 em7pr1-haapps-02   Online    Online
     
    scal-PR1-usr-sap-haapps-rs   em7pr1-haapps-01   Online    Online
                                 em7pr1-haapps-02   Online    Online
     
    scal-PR1-sapmnt-rs           em7pr1-haapps-01   Online    Online
                                 em7pr1-haapps-02   Online    Online
     
    scal-ORACLE-oracle-rs        em7pr1-haapps-01   Online    Online
                                 em7pr1-haapps-02   Online    Online
     
    oep-rs                       em7pr1-haapps-02   Online    Online - Service PR2 is UP 
    [Instance is OPEN]
                                 em7pr1-haapps-01   Online    Online - Service PR2 is UP 
    [Instance is OPEN]
     
    root@em7pr1-haapps-01:~# clrg set -p RG_affinities+=++scalmnt-rg pas-rg
     
    root@em7pr1-haapps-01:~# clrg show -p RG_affinities pas-rg
     
    === Resource Groups and Resources ===
     
    Resource Group:                                 pas-rg
      RG_affinities:                                   ++scalmnt-rg
     
    root@em7pr1-haapps-01:~# clrs enable +
     
    root@em7pr1-haapps-01:~# ps -eaf |grep sap |wc -l
          32

    Listing 46: Checking cluster resource status.

     

    To create resources for the Additional Application Servers (AAS) D01 and D02, follow the same procedure:

     

    root@em7pr1-haapps-01:~# clrs create -d -g d01-rg -t ORCL.sapstartsrv \
      -p SID=PR1 -p sap_user=pr1adm \
      -p instance_number=01 -p instance_name=D01 \
      -p host=im7pr1-d01-lh -p child_mon_level=5 \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs,\
    scal-ORACLE-oracle-rs -p timeout_return=20 d01-startsrv-rs
     
    root@em7pr1-haapps-01:~# clrs create -d -g d01-rg -t ORCL.sapdia \
      -p SID=PR1 -p sap_user=pr1adm \
      -p instance_number=01 -p instance_name=D01 \
      -p host=im7pr1-d01-lh -p resource_project_name=PR1 \
      -p resource_dependencies=d01-startsrv-rs,scal-ORACLE-oracle-rs,oep-rs \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs \
      -p yellow=20 d01-rs
     
    root@em7pr1-haapps-01:~# clrg set -p RG_affinities+=++scalmnt-rg d01-rg
     
    root@em7pr1-haapps-01:~# clrs enable +
     
    root@em7pr1-haapps-01:~# clrs create -d -g d02-rg -t ORCL.sapstartsrv \
      -p SID=PR1 -p sap_user=pr1adm \
      -p instance_number=02 -p instance_name=D02 \
      -p host=im7pr1-d02-lh -p child_mon_level=5 \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs,\
    scal-ORACLE-oracle-rs -p timeout_return=20 d02-startsrv-rs
     
    root@em7pr1-haapps-01:~# clrs create -d -g d02-rg -t ORCL.sapdia  \
      -p SID=PR1 -p sap_user=pr1adm \
      -p instance_number=02 -p instance_name=D02 \
      -p host=im7pr1-d02-lh -p resource_project_name=PR1 \
      -p resource_dependencies=d02-startsrv-rs,scal-ORACLE-oracle-rs,oep-rs \
      -p resource_dependencies_offline_restart=scal-PR1-usr-sap-haapps-rs,scal-PR1-sapmnt-rs \
      -p yellow=20 d02-rs
     
    root@em7pr1-haapps-01:~# clrg set -p RG_affinities+=++scalmnt-rg d02-rg
     
    root@em7pr1-haapps-01:~# clrs enable +

    Listing 47: Creating cluster resources for Additional Application Servers D01 and D02.

     

    Step 7. Configure the cross-zone resource dependencies.

     

    Next, it is necessary to configure resource dependencies across zone clusters:

     

    root@sapm7adm-haapp-0101:~# clzc list
    pr1-ascs-zc
    pr1-haapps-zc
     
    root@sapm7adm-haapp-0101:~# clrs set -Z pr1-haapps-zc \
      -p resource_dependencies+=pr1-ascs-zc:ascs-rs pas-rs
     
    root@sapm7adm-haapp-0101:~# clrs status -Z pr1-ascs-zc
     
    === Cluster Resources ===
     
    Resource Name           Node Name        State     Status Message
    -------------           ---------        -----     --------------
    preempter-rs            em7pr1-ascs-01   Offline   Offline
                            em7pr1-ascs-02   Online    Online - Service is online.
     
    ascs-rs                 em7pr1-ascs-01   Offline   Offline
                            em7pr1-ascs-02   Online    Online - Service is online.
     
    ascs-startsrv-rs        em7pr1-ascs-01   Offline   Offline
                            em7pr1-ascs-02   Online    Online - Service is online.
     
    im7pr1-ascs-lh          em7pr1-ascs-01   Offline   Offline - LogicalHostname offline.
                            em7pr1-ascs-02   Online    Online - LogicalHostname online.
     
    em7pr1-ascs-lh          em7pr1-ascs-01   Offline   Offline - LogicalHostname offline.
                            em7pr1-ascs-02   Online    Online - LogicalHostname online.
     
    ers-rs                  em7pr1-ascs-01   Online    Online - Service is online.
                            em7pr1-ascs-02   Offline   Offline
     
    ers-startsrv-rs         em7pr1-ascs-01   Online    Online - Service is online.
                            em7pr1-ascs-02   Offline   Offline
     
    im7pr1-ers-lh           em7pr1-ascs-01   Online    Online - LogicalHostname online.
                            em7pr1-ascs-02   Offline   Offline - LogicalHostname offline.
     
    em7pr1-ers-lh           em7pr1-ascs-01   Online    Online - LogicalHostname online.
                            em7pr1-ascs-02   Offline   Offline - LogicalHostname offline.
     
    scal-usr-sap-trans-rs   em7pr1-ascs-01   Online    Online
                            em7pr1-ascs-02   Online    Online
     
    scal-usr-sap-rs         em7pr1-ascs-01   Online    Online
                            em7pr1-ascs-02   Online    Online
     
    scal-sapmnt-PR1-rs      em7pr1-ascs-01   Online    Online
                            em7pr1-ascs-02   Online    Online
     
    root@sapm7adm-haapp-0101:~# clrs show -p resource_dependencies -Z pr1-haapps-zc pas-rs
     
    === Resources ===
     
    Resource:                                       pr1-haapps-zc:pas-rs
      Resource_dependencies:                           pas-startsrv-rs scal-ORACLE-oracle-rs 
    oep-rs pr1-ascs-zc:ascs-rs
     
      --- Standard and extension properties ---
    Or, from inside the zone cluster, run the equivalent command:
    # clrs show -p resource_dependencies pas-rs
     
    ----
    em7pr1-ascs-01:pr1adm 2% lgtst name=PR1 -H im7pr1-lh -S 3600
    using trcfile: dev_lg
     
    list of reachable application servers
    -------------------------------------
    [im7pr1-pas-lh_PR1_00] [im7pr1-pas-lh] [192.168.28.166] [sapdp00] [3200] [DIA UPD BTC SPO 
    UP2 ICM ]

    Listing 48: Configuring and checking cross-zone resource dependencies.

     

    Step 8. Edit the SAP profile and grant cluster administration privilege to the SID administrator.

     

    Integrate Oracle Solaris Cluster management and SAP instance management. Add the three parameters in Listing 49 to the default profile of the SAP system (/sapmnt/<SID>/profile/DEFAULT.PFL) or to the instance profile of each instance, replacing <SID> with the real SID. Note that the second parameter and its value must be entered on one line.

     

    #
    # SAP HA Script Connector
    #
    service/halib = /usr/sap/<SID>/SYS/exe/run/saphascriptco.so
    service/halib_cluster_connector = /opt/ORCLscsapnetw/saphacmd/bin/sap_orcl_cluster_connector
    service/halib_debug_level = 1

    Listing 49: SAP profile parameters.

     

    Grant the cluster administration privilege to the SID administrator in all the clustered zones, so that user <sid>adm can run Oracle Solaris Cluster commands:

     

    # usermod -A solaris.cluster.admin pr1adm

    Listing 50: Granting cluster administration privilege to user pr1adm.

     

    Without this step, you cannot start or stop your SAP instance once the halib directive is enabled in the instance profile.
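
     

    To verify the grant, list the authorizations of user pr1adm with the standard Solaris auths command; the output should include solaris.cluster.admin. A minimal sketch, run in each clustered zone:

    # auths pr1adm | tr ',' '\n' | grep solaris.cluster.admin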

     

    Restart the instances and the corresponding sapstartsrv process. Once you have completed this step, the startsap and stopsap commands will always use the Oracle Solaris Cluster methods.
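
     

    One way to restart the sapstartsrv process and the instance is through sapcontrol, as in the following sketch run as user pr1adm (instance number 00 is the example value; repeat for each instance):

    $ sapcontrol -nr 00 -function RestartService
    $ sapcontrol -nr 00 -function RestartInstance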

     

    For up-to-date configuration and tuning recommendations, check the SAP-specific Oracle Solaris Cluster documentation (Oracle Solaris Cluster Data Service for SAP NetWeaver Guide).

     

    Final Thoughts

     

    Because SAP applications often support vital business functions, certain SAP servers, such as the ASCS/ERS and PAS servers, require advanced levels of availability. To deploy highly available SAP applications, it is necessary to implement critical SAP components and application servers using the zone clustering capabilities of Oracle Solaris Cluster. When these SAP components are properly installed and configured as managed resources under Oracle Solaris Cluster, as shown in this article, it is possible to eliminate single points of failure and enable the highest service levels.

     

    Migrating SAP applications from an existing platform can be a challenging project that is often time consuming and complex. Oracle and SAP engineers collaborated to perform and document an SAP migration to an Oracle engineered system (specifically, to Oracle SuperCluster M7). During this process, they composed a six-part article series that describes different database migration methods, outlining each step and providing best practices. In addition, the engineering team compiled step-by-step instructions for building a highly available SAP production environment on an Oracle engineered system, documenting these steps in an additional three-part article series. This article, the last in that three-part series, explained how to configure Oracle Solaris Cluster and install SAP components that require the most advanced levels of availability.

     

    See Also

     

    Refer to these resources for more information:

     

    Online Resources

     

    • Oracle Optimized Solution for SAP page on the Oracle Technology Network
    • "Best Practices for Migrating SAP Systems to Oracle Infrastructure" (article series on database migration methods)
    • "Converting Single-Instance Oracle Databases for SAP to Oracle RAC"

     

    Documentation

     

    • Oracle Solaris Cluster Data Service for SAP NetWeaver Guide

     

    About the Authors

     

    Jan Brosowski is a principal sales consultant for Oracle Systems in Europe North. Located in Walldorf, Germany, he is responsible for developing customer-specific architectures and operating models for both SAP and Hyperion systems, accompanying the projects from the requirements specification process to going live. Brosowski holds a Master of Business and Engineering degree and has been working for over 15 years with SAP systems in different roles.

     

    Victor Galis is a master sales consultant, part of the global Oracle Solution Center organization. He supports customers and sales teams architecting SAP environments based on Oracle hardware and technology. He works with SAP Basis and DBA teams, systems and storage administrators, as well as business owners and executives. His role is to understand current environments, business requirements, and pain points as well as future growth, and to map them to SAP landscapes that meet both performance and high availability expectations. He has been involved with many SAP on Oracle SuperCluster customer environments as an architect and has provided deployment and go-live assistance. Galis is an SAP-certified consultant and Oracle Database administrator.

     

    Gia-Khanh Nguyen is an architect for Oracle Solaris Cluster. He contributed to the product requirement and design specifications for features supporting HA and DR enterprise solutions and developed demonstrations of key features.

     

    Pierre Reynes is a solution manager for Oracle Optimized Solution for SAP and Oracle Optimized Solution for PeopleSoft. He is responsible for driving the strategy and efforts to help raise customer and market awareness for Oracle Optimized Solutions in these areas. Reynes has over 25 years of experience in the computer and network industries.