Setting Up Highly Available SAP Systems on Oracle SuperCluster—Part 2: Deploying and Configuring Oracle Solaris Cluster for SAP on Oracle SuperCluster

Version 4

    by Jan Brosowski, Victor Galis, Gia-Khanh Nguyen, and Pierre Reynes

     

    This article is Part 2 of a three-part series that describes steps for setting up SAP on Oracle SuperCluster in a highly available configuration. This article focuses on procedures for deploying the Oracle Solaris Cluster software and using it to configure two zone clusters across two nodes.

     

    Table of Contents
    Introduction
    Preparing Cluster Interconnects and iSCSI LUN to Be Used as the Quorum Device
    Installing and Configuring the Oracle Solaris Cluster 4.3 Software
    Creating a Cluster Using the Oracle Solaris Cluster Manager BUI
    Migrating Zones from the Source Platform
    Creating Zone Clusters
    Final Thoughts
    See Also
    About the Authors

     

    Introduction

     

    This article is Part 2 of a three-part series that provides best practices and recommendations for setting up highly available SAP systems on Oracle engineered systems. A team of Oracle engineers and SAP experts used Oracle SuperCluster M7 in a sample deployment to compile and test the step-by-step procedures and recommendations provided in this article series.

     

    To achieve high availability (HA), it is necessary to put mission-critical SAP components under the control of Oracle Solaris Cluster, creating a cluster of two or more zones on Oracle SuperCluster. Oracle Solaris Cluster can then orchestrate failover or disaster recovery strategies while managing infrastructure resources such as database, network connectivity, and shared storage.

    Oracle Optimized Solutions provide tested and proven best practices for how to run software products on Oracle systems.

     

    This article series ("Setting Up Highly Available SAP Systems on Oracle SuperCluster") divides configuration tasks into three articles:

     

    • Part 1: Configuring Virtualization for SAP on Oracle SuperCluster. The first article describes the steps needed to prepare virtual environments on Oracle SuperCluster. These virtual environments are Oracle VM Server for SPARC logical domains (LDoms) configured as separate, isolated I/O domains, which helps improve application resiliency. Two pairs of I/O domains are created on the nodes (Figure 1): one domain pair for SAP components and application servers that require advanced HA, and a second pair for application servers that are not mission-critical.
    • Part 2: Deploying and Configuring Oracle Solaris Cluster for SAP on Oracle SuperCluster. Part 2 is this article, which describes the steps for installing the Oracle Solaris Cluster software and using it to configure two zone clusters across the two nodes shown in Figure 1. The first zone cluster is dedicated to the most-critical SAP components: the ABAP Central Services instance (ASCS) and Enqueue Replication Server instance (ERS). The second zone cluster is used for the SAP Primary Application Server (PAS) and any other mission-critical Additional Application Server (AAS).
    • Part 3: Installing Highly Available SAP with Oracle Solaris Cluster on Oracle SuperCluster. The third article describes the step-by-step procedures for installing SAP ABAP stack components in the zone clusters and configuring them for HA. Oracle Solaris Cluster implements the concept of logical hostnames: IP addresses managed as cluster resources that are monitored for availability and, if necessary, moved transparently to another node.

     

    f1.png

    Figure 1. An HA configuration for SAP uses the zone clustering capabilities of Oracle Solaris Cluster.

     

    Preparing Cluster Interconnects and iSCSI LUN to Be Used as the Quorum Device

     

    This section contains step-by-step instructions to prepare Oracle SuperCluster shared storage and network resources for use by Oracle Solaris Cluster, which involves the following tasks:

     

    • Creating InfiniBand (IB) partition data links and IP interfaces for the cluster interconnects
    • Configuring the iSCSI LUN that is used as a quorum device

     

    Step 1. Create the IB partition data links and IP interfaces for the cluster interconnects.

     

    Before Oracle Solaris Cluster is installed and configured, it is necessary to configure storage and network interconnects for the two-node cluster that the software will manage. Dedicated IB partitions are created to support the internode traffic for the Oracle Solaris Cluster interconnect.

     

    Log in to each HAAPP I/O domain (the first article in this series created the HAAPP domains as sapm7adm-haapp-0101 and sapm7adm-haapp-0201). Identify the InfiniBand (IB) links in the domain, and create the IB partition data links and IP interfaces for the two cluster interconnects, ic1 and ic2. Listing 1 and Listing 2 show the commands executed in the domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

     

    root@sapm7adm-haapp-0101:~# dladm show-ib
    LINK      HCAGUID        PORTGUID       PORT STATE   GWNAME       GWPORT   PKEYS
    net6      10E100014AC620 10E000654AC622 2    up      --           --       8503,8512,FFFF
    net5      10E100014AC620 10E000654AC621 1    up      --           --       8503,8511,FFFF
     
    root@sapm7adm-haapp-0101:~# dladm create-part -l net5 -P 8511 ic1
    root@sapm7adm-haapp-0101:~# dladm create-part -l net6 -P 8512 ic2
    root@sapm7adm-haapp-0101:~# dladm show-part
    LINK         PKEY  OVER         STATE    FLAGS
    sys-root0    8503  net5         up       f---
    sys-root1    8503  net6         up       f---
    stor_ipmp0_0 8503  net6         up       f---
    stor_ipmp0_1 8503  net5         up       f---
    ic1          8511  net5         unknown  ----
    ic2          8512  net6         unknown  ----
    root@sapm7adm-haapp-0101:~# ipadm create-ip ic1
    root@sapm7adm-haapp-0101:~# ipadm create-ip ic2

    Listing 1: Configuring the interconnects on domain sapm7adm-haapp-0101.

     

    root@sapm7adm-haapp-0201:~# dladm show-ib
    LINK      HCAGUID        PORTGUID       PORT STATE   GWNAME       GWPORT   PKEYS
    net5      10E100014AA7B0 10E000654AA7B1 1    up      --           --       8503,8511,FFFF
    net6      10E100014AA7B0 10E000654AA7B2 2    up      --           --       8503,8512,FFFF
    root@sapm7adm-haapp-0201:~# dladm create-part -l net5 -P 8511 ic1
    root@sapm7adm-haapp-0201:~# dladm create-part -l net6 -P 8512 ic2
    root@sapm7adm-haapp-0201:~# dladm show-part
    LINK         PKEY  OVER         STATE    FLAGS
    sys-root0    8503  net5         up       f---
    sys-root1    8503  net6         up       f---
    stor_ipmp0_0 8503  net6         up       f---
    stor_ipmp0_1 8503  net5         up       f---
    ic1          8511  net5         unknown  ----
    ic2          8512  net6         unknown  ----
    root@sapm7adm-haapp-0201:~# ipadm create-ip ic1
    root@sapm7adm-haapp-0201:~# ipadm create-ip ic2
    root@sapm7adm-haapp-0101:~# iscsiadm modify discovery -s enable
    root@sapm7adm-haapp-0201:~# iscsiadm modify discovery -s enable

    Listing 2: Configuring the interconnects in the domain sapm7adm-haapp-0201 and enabling static iSCSI discovery on both domains.

     

    Interfaces ic1 and ic2 are now ready to be used as cluster interconnects over partitions 8511 and 8512. It is important that the interfaces use the same partition keys on both nodes; in this example, ic1 is on partition 8511 and ic2 is on partition 8512 on both nodes. The interfaces are also configured on ports connected to different IB switches, so the failure of a single switch cannot disable both interconnects. The last two commands in Listing 2 enable static iSCSI target discovery in both domains, which is needed later when the quorum LUN is configured.
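
    As an optional verification (not part of the captured output above), the new IP interfaces can be listed on each node before the cluster is created; ic1 and ic2 should exist but have no addresses assigned yet:

    root@sapm7adm-haapp-0101:~# ipadm show-if ic1
    root@sapm7adm-haapp-0101:~# ipadm show-if ic2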

     

    Step 2. Configure the iSCSI LUN to be used as the quorum device.

     

    A quorum device is implemented as an iSCSI LUN on the shared internal Oracle ZFS Storage Appliance. (Oracle Solaris Cluster uses the quorum device to protect against catastrophic situations, such as split brain or amnesia, that could otherwise result in data corruption.)

     

    Configuring the quorum device requires several substeps, as follows:

     

    • Identifying the iSCSI initiators
    • Creating a quorum iSCSI initiator group
    • Creating the quorum iSCSI target and target group
    • Creating a quorum project and an iSCSI LUN for the quorum device
    • Configuring the cluster nodes to see the quorum iSCSI LUN

     

    On Oracle SuperCluster, iSCSI LUNs are used as boot devices, so each global zone is already set up to access iSCSI LUNs on the internal Oracle ZFS Storage Appliance.

     

    Identify the iSCSI initiator nodes used to boot each node. Listing 3 and Listing 4 show the commands executed in domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

     

    root@sapm7adm-haapp-0101:~# iscsiadm list initiator-node
    Initiator node name: iqn.1986-03.com.sun:boot.00144ff828d4
    Initiator node alias: -
            Login Parameters (Default/Configured):
                    Header Digest: NONE/-
                    Data Digest: NONE/-
                    Max Connections: 65535/-
            Authentication Type: NONE
            RADIUS Server: NONE
            RADIUS Access: disabled
            Tunable Parameters (Default/Configured):
                    Session Login Response Time: 60/-
                    Maximum Connection Retry Time: 180/240
                    Login Retry Time Interval: 60/-
            Configured Sessions: 1

    Listing 3: Identifying the initiator nodes on the first cluster node.

     

    Listing 4 shows the commands in the second domain, sapm7adm-haapp-0201.

     

    root@sapm7adm-haapp-0201:~# iscsiadm list initiator-node
    Initiator node name: iqn.1986-03.com.sun:boot.00144ff9a0f9
    Initiator node alias: -
            Login Parameters (Default/Configured):
                    Header Digest: NONE/-
                    Data Digest: NONE/-
                    Max Connections: 65535/-
            Authentication Type: NONE
            RADIUS Server: NONE
            RADIUS Access: disabled
            Tunable Parameters (Default/Configured):
                    Session Login Response Time: 60/-
                    Maximum Connection Retry Time: 180/240
                    Login Retry Time Interval: 60/-
            Configured Sessions: 1

    Listing 4: Identifying the initiator nodes on the second cluster node.

     

    Step 3. Configure the quorum device.

     

    Identify the Oracle ZFS Storage Appliance hostnames, log in, and list the iSCSI initiators. The hostnames for the Oracle ZFS Storage Appliance cluster heads in the example deployment are

     

    10.129.112.136 sapm7-h1-storadm
    10.129.112.137 sapm7-h2-storadm

    Log in to each cluster head host and list the iSCSI initiators (Listing 5).

     

    sapm7-h1-storadm:configuration san initiators iscsi> ls
    Initiators:
     
    NAME          ALIAS
    initiator-000 init_sc1cn1dom0
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.0010e0479e74
     
    initiator-001 init_sc1cn1dom1
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ff8faae
     
    initiator-002 init_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ff97c9b
     
    initiator-003 init_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ff828d4
     
    initiator-004 init_sc1cn2dom0
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.0010e0479e75
     
    initiator-005 init_sc1cn2dom1
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ffbf174
     
    initiator-006 init_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ffb3b6c
    initiator-007 init_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
                  |
                  +-> INITIATOR
                      iqn.1986-03.com.sun:boot.00144ff9a0f9
     
     
    Children:
                               groups => Manage groups

    Listing 5: Identifying the iSCSI initiators on each cluster node.

     

    Note that the iSCSI initiators used to access the boot LUNs of the HAAPP domains already exist. The commands in Listing 6 create the quorum iSCSI initiator group (QuorumGroup-haapp-01) that contains both initiators, because both nodes must be able to access the quorum LUN:

     

    sapm7-h1-storadm:configuration san initiators iscsi groups> create
    sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> ls
    Properties:
                              name = (unset)
                        initiators = (unset)
     
    sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> set name=QuorumGroup-haapp-01
                              name = QuorumGroup-haapp-01 (uncommitted)
    sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> set initiators=\
    iqn.1986-03.com.sun:boot.00144ff828d4,iqn.1986-03.com.sun:boot.00144ff9a0f9
                        initiators = iqn.1986-03.com.sun:boot.00144ff828d4,iqn.1986-03.com.sun:boot.00144ff9a0f9 (uncommitted)
    sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> commit
    sapm7-h1-storadm:configuration san initiators iscsi groups> ls
    Groups:
     
    GROUP     NAME
    group-000 QuorumGroup-haapp-01
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff9a0f9
                  iqn.1986-03.com.sun:boot.00144ff828d4
     
    group-001 initgrp_sc1cn1_service
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff8faae
                  iqn.1986-03.com.sun:boot.0010e0479e74
     
    group-002 initgrp_sc1cn1dom0
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.0010e0479e74
     
    group-003 initgrp_sc1cn1dom1
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff8faae
     
    group-004 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff97c9b
     
    group-005 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff828d4
     
    group-006 initgrp_sc1cn2_service
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ffbf174
                  iqn.1986-03.com.sun:boot.0010e0479e75
     
    group-007 initgrp_sc1cn2dom0
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.0010e0479e75
     
    group-008 initgrp_sc1cn2dom1
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ffbf174
     
    group-009 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ffb3b6c
     
    group-010 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
              |
              +-> INITIATORS
                  iqn.1986-03.com.sun:boot.00144ff9a0f9
     
    sapm7-h1-storadm:configuration san initiators iscsi groups> cd ../..
    sapm7-h1-storadm:configuration san initiators> cd ..

    Listing 6: Creating the quorum iSCSI initiator group.

     

    Next, create a quorum iSCSI target, which will subsequently be added to a target group. Note that ipmp3 is the IPMP interface that carries Oracle ZFS Storage Appliance traffic over InfiniBand for head 1 of the appliance. Create a quorum iSCSI target that uses that interface (Listing 7).

     

    sapm7-h1-storadm:configuration net interfaces> ls
    Interfaces:
     
    INTERFACE   STATE    CLASS LINKS       ADDRS                  LABEL
    ibpart1     up       ip    ibpart1     0.0.0.0/32             p8503_ibp0
    ibpart2     up       ip    ibpart2     0.0.0.0/32             p8503_ibp1
    ibpart3     offline  ip    ibpart3     0.0.0.0/32             p8503_ibp0
    ibpart4     offline  ip    ibpart4     0.0.0.0/32             p8503_ibp1
    ibpart5     up       ip    ibpart5     0.0.0.0/32             p8503_ibp0
    ibpart6     up       ip    ibpart6     0.0.0.0/32             p8503_ibp1
    ibpart7     offline  ip    ibpart7     0.0.0.0/32             p8503_ibp0
    ibpart8     offline  ip    ibpart8     0.0.0.0/32             p8503_ibp1
    igb0        up       ip    igb0        10.129.112.136/20      igb0
    igb2        up       ip    igb2        10.129.97.146/20       igb2
    ipmp1       up       ipmp  ibpart1     192.168.24.9/22        ipmp_versaboot1
                               ibpart2                            
    ipmp2       offline  ipmp  ibpart3     192.168.24.10/22       ipmp_versaboot2
                               ibpart4                            
    ipmp3       up       ipmp  ibpart5     192.168.28.1/22        ipmp_stor1
                               ibpart6                            
    ipmp4       offline  ipmp  ibpart7     192.168.28.2/22        ipmp_stor2
                               ibpart8                            
    vnic1       up       ip    vnic1       10.129.112.144/20      vnic1
    vnic2       offline  ip    vnic2       10.129.112.145/20      vnic2
     
    sapm7-h1-storadm:configuration san> targets iscsi
    sapm7-h1-storadm:configuration san targets iscsi> create
    sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set alias=QuorumTarget-haapp-01
                             alias = QuorumTarget-haapp-01 (uncommitted)
    sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set interfaces=ipmp3
                        interfaces = ipmp3 (uncommitted)
    sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> commit
    sapm7-h1-storadm:configuration san targets iscsi> show
    Targets:
     
    TARGET     ALIAS          
    target-000 QuorumTarget-haapp-01
               |
               +-> IQN
                   iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
     
    target-001 targ_sc1sn1_iodinstall
               |
               +-> IQN
                   iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
     
    target-002 targ_sc1sn1_ipmp1
               |
               +-> IQN
                   iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
     
    target-003 targ_sc1sn1_ipmp2
               |
               +-> IQN
                   iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3

    Listing 7: Creating the quorum iSCSI target.

     

    Using the target just created, define a quorum iSCSI target group (QuorumGroup-haapp-01), as shown in Listing 8.

     

    sapm7-h1-storadm:configuration san targets iscsi> groups
    sapm7-h1-storadm:configuration san targets iscsi groups> create
    sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set name=\
    QuorumGroup-haapp-01
                              name = QuorumGroup-haapp-01 (uncommitted)
    sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set targets=\
    iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
                           targets = iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f (uncommitted)
    sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> commit 
    sapm7-h1-storadm:configuration san targets iscsi groups> show
    Groups:
     
    GROUP     NAME
    group-000 QuorumGroup-haapp-01
              |
              +-> TARGETS
                  iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
    group-001 targgrp_sc1sn1_iodinstall
              |
              +-> TARGETS
                  iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
     
    group-002 targgrp_sc1sn1_ipmp1
              |
              +-> TARGETS
                  iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
     
    group-003 targgrp_sc1sn1_ipmp2
              |
              +-> TARGETS
                  iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3

    Listing 8: Creating a group for the quorum iSCSI target.

     

    Create a quorum project and an iSCSI LUN for the quorum device, as shown in Listing 9.

     

    sapm7-h1-storadm:configuration san targets iscsi groups> cd /
    sapm7-h1-storadm:> shares
    sapm7-h1-storadm:shares> ls
    Properties:
                              pool = supercluster1
     
    Projects:
                         IPS-repos
                          OSC-data
                         OSC-oeshm
                              OVMT
                           default
                        sc1-ldomfs
    Children:
                           encryption => Manage encryption keys
                          replication => Manage remote replication
                               schema => Define custom property schema
     
    sapm7-h1-storadm:shares> project QuorumProject
    sapm7-h1-storadm:shares QuorumProject (uncommitted)> commit
    sapm7-h1-storadm:shares> select QuorumProject
    sapm7-h1-storadm:shares QuorumProject> lun QuorumLUN-haapp-01
    sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set volsize=1G
                           volsize = 1G (uncommitted)
    sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set targetgroup=\
    QuorumGroup-haapp-01
                       targetgroup = QuorumGroup-haapp-01 (uncommitted)
    sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set initiatorgroup=\
    QuorumGroup-haapp-01
                    initiatorgroup = QuorumGroup-haapp-01 (uncommitted)
    sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set lunumber=0
                          lunumber = 0 (uncommitted)
    sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> commit
    sapm7-h1-storadm:shares QuorumProject> ls
    Properties:
                        aclinherit = restricted
                           aclmode = discard
                             atime = true
                          checksum = fletcher4
                       compression = off
                             dedup = false
                     compressratio = 100
                            copies = 1
                          creation = Fri Jan 22 2016 00:15:15 GMT+0000 (UTC)
                           logbias = latency
                        mountpoint = /export
                             quota = 0
                          readonly = false
                        recordsize = 128K
                       reservation = 0
                          rstchown = true
                    secondarycache = all
                            nbmand = false
                          sharesmb = off
                          sharenfs = on
                           snapdir = hidden
                             vscan = false
                  defaultuserquota = 0
                 defaultgroupquota = 0
                        encryption = off
                         snaplabel = 
                          sharedav = off
                          shareftp = off
                         sharesftp = off
                         sharetftp = off
                              pool = supercluster1
                    canonical_name = supercluster1/local/QuorumProject
                     default_group = other
               default_permissions = 700
                    default_sparse = false
                      default_user = nobody
              default_volblocksize = 8K
                   default_volsize = 0
                          exported = true
                         nodestroy = false
                      maxblocksize = 1M
                        space_data = 31K
                  space_unused_res = 0
           space_unused_res_shares = 0
                   space_snapshots = 0
                   space_available = 7.10T
                       space_total = 31K
                            origin = 
     
    Shares:
     
     
    LUNs:
     
    NAME                VOLSIZE ENCRYPTED     GUID
    QuorumLUN-haapp-01  1G     off           600144F09EF4EF20000056A1756A0015
     
    Children:
                               groups => View per-group usage and manage group
                                         quotas
                          replication => Manage remote replication
                            snapshots => Manage snapshots
                                users => View per-user usage and manage user quotas

    Listing 9: Creating a quorum project and an iSCSI LUN for the quorum device.

     

    On each cluster node, add a static configuration for the quorum iSCSI target and verify that the quorum LUN is visible. Listing 10 and Listing 11 show the commands executed in domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

     

    root@sapm7adm-haapp-0101:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-\
    5ec2-6331-bbca-fa190035423f,192.168.28.1
    root@sapm7adm-haapp-0101:~# iscsiadm list static-config
    Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f,192.168.28.1:3260
    root@sapm7adm-haapp-0101:~# iscsiadm list target -S
    Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
            Alias: QuorumTarget-haapp-01
            TPGT: 2
            ISID: 4000002a0000
            Connections: 1
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
     
    Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
            Alias: targ_sc1sn1_ipmp1
            TPGT: 2
            ISID: 4000002a0001
            Connections: 1
            LUN: 1
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2
     
    Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
            Alias: targ_sc1sn1_ipmp1
            TPGT: 2
            ISID: 4000002a0000
            Connections: 1
            LUN: 1
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2

    Listing 10: Configuring one cluster node's iSCSI target to see the quorum LUN.

     

    root@sapm7adm-haapp-0201:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-\
    5ec2-6331-bbca-fa190035423f,192.168.28.1
    root@sapm7adm-haapp-0201:~# iscsiadm list static-config
    Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f,192.168.28.1:3260
    root@sapm7adm-haapp-0201:~# iscsiadm list target -S
    Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
            Alias: QuorumTarget-haapp-01
            TPGT: 2
            ISID: 4000002a0000
            Connections: 1
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
     
    Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
            Alias: targ_sc1sn1_ipmp2
            TPGT: 2
            ISID: 4000002a0001
            Connections: 1
            LUN: 2
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2
     
    Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
            Alias: targ_sc1sn1_ipmp2
            TPGT: 2
            ISID: 4000002a0000
            Connections: 1
            LUN: 2
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
            LUN: 0
                 Vendor:  SUN     
                 Product: Sun Storage 7000
                 OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2

    Listing 11: Configuring the other cluster node's iSCSI target to see the quorum LUN.

     

    The newly created iSCSI LUN can now be accessed from both nodes. During cluster creation, Oracle Solaris Cluster automatically recognizes it and offers it as the default quorum device.
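
    Before installing the cluster software, it can be worthwhile to confirm that both domains see the same quorum LUN. One hypothetical check is to match the LUN GUID reported in Listing 9 against the disk list produced by the format utility on each node:

    root@sapm7adm-haapp-0101:~# echo | format | grep 600144F09EF4EF20000056A1756A0015
    root@sapm7adm-haapp-0201:~# echo | format | grep 600144F09EF4EF20000056A1756A0015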

     

    Installing and Configuring the Oracle Solaris Cluster 4.3 Software

     

    Step 1. Install the solaris-small-server package group on both nodes.

     

    Refer to the Oracle Solaris Cluster 4.3 Software Installation Guide for detailed information about installing the Oracle Solaris Cluster 4.3 software. The Oracle Solaris Cluster software requires at least the Oracle Solaris solaris-small-server package group. Because the I/O domains in this example were originally installed with the solaris-minimal-server package group, they require the installation of the solaris-small-server package group on both nodes.

     

    Listing 12 shows that the solaris-small-server package group is not installed on node 1, sapm7adm-haapp-0101.

     

    root@sapm7adm-haapp-0101:~# pkg info -r solaris-small-server
              Name: group/system/solaris-small-server
           Summary: Oracle Solaris Small Server
       Description: Provides a useful command-line Oracle Solaris environment
          Category: Meta Packages/Group Packages
             State: Not installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.3.1.0.5.0
    Packaging Date: Tue Oct 06 13:56:21 2015
              Size: 5.46 kB
              FMRI: pkg://solaris/group/system/solaris-small-server@0.5.11,5.11-0.175.3.1.0.5.0:20151006T135621Z

    Listing 12: The solaris-small-server package group is not installed on node 1.

     

    Repeat this check on the second node, sapm7adm-haapp-0201, to confirm whether the solaris-small-server package group is already installed there.

     

    Perform the steps in Listing 13 to install the Oracle Solaris solaris-small-server package group on node 1, sapm7adm-haapp-0101.

     

    root@sapm7adm-haapp-0101:~# pkg install --accept --be-name solaris-small solaris-small-server 
               Packages to install:  92
           Create boot environment: Yes
    Create backup boot environment:  No
     
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                              92/92   13209/13209  494.7/494.7    0B/s
     
    PHASE                                          ITEMS
    Installing new actions                   19090/19090
    Updating package state database                 Done 
    Updating package cache                           0/0 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Updating package cache                           2/2 
     
    A clone of install exists and has been updated and activated.
    On the next boot the Boot Environment solaris-small will be
    mounted on '/'.  Reboot when ready to switch to this updated BE.
     
    Updating package cache                           2/2 
    root@sapm7adm-haapp-0101:~# beadm list
    BE            Flags Mountpoint Space  Policy Created          
    --            ----- ---------- -----  ------ -------          
    install       N     /          484.0K static 2016-01-19 16:53 
    solaris-small R     -          4.72G  static 2016-01-21 16:35 

    Listing 13: Installing the solaris-small-server package group on node 1.

     

    Repeat the same steps to install the solaris-small-server package group on the second node, sapm7adm-haapp-0201.

     

    Step 2. Reboot both nodes.

     

    After installing the package group on both nodes, reboot the first node, sapm7adm-haapp-0101, to mount the solaris-small boot environment as /.

     

    root@sapm7adm-haapp-0101:~# reboot
    root@sapm7adm-haapp-0101:~# beadm list
    BE            Flags Mountpoint Space  Policy Created          
    --            ----- ---------- -----  ------ -------          
    install       -     -          88.06M static 2016-01-19 16:53 
    solaris-small NR    /          4.85G  static 2016-01-21 16:35 

    Listing 14: Rebooting Oracle Solaris on node 1.

     

    Repeat the same step to reboot the second node, sapm7adm-haapp-0201.

     

    Step 3. Install the Oracle Solaris Cluster software.

     

    It is recommended that you install the ha-cluster-full package group because it contains packages for all data services implemented by Oracle Solaris Cluster. If any other package group is installed (such as the ha-cluster-minimal package group), SAP-specific Oracle Solaris Cluster packages must be added manually before you can properly configure clustered resources. Check the Oracle Solaris Cluster 4.3 Software Installation Guide for complete information on package recommendations.
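
    If a smaller package group was installed instead, the available data service packages can be listed from the ha-cluster publisher and the SAP-related ones added manually with pkg install. This is a sketch only; the exact package names (FMRIs) depend on the Oracle Solaris Cluster release:

    root@sapm7adm-haapp-0101:~# pkg list -a 'ha-cluster/data-service/*'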

     

    Locate the repository for the Oracle Solaris Cluster software, configure the repository location for the ha-cluster publisher, and install the ha-cluster-full package group (Listing 15).

     

    root@sapm7adm-haapp-0101:~# pkg publisher
    PUBLISHER                   TYPE     STATUS P LOCATION
    solaris                     origin   online F file:///net/192.168.28.1/export/IPS-repos/solaris11/repo/
    exa-family                  origin   online F file:///net/192.168.28.1/export/IPS-repos/exafamily/repo/
     
    root@sapm7adm-haapp-0101:~# ls /net/192.168.28.1/export/IPS-repos/osc4/repo
    pkg5.repository  publisher
     
    root@sapm7adm-haapp-0101:~# pkg set-publisher -g file:///net/192.168.28.1/export/IPS-repos\
    /osc4/repo ha-cluster
     
    root@sapm7adm-haapp-0101:~# pkg info -r ha-cluster-full
              Name: ha-cluster/group-package/ha-cluster-full
           Summary: Oracle Solaris Cluster full installation group package
       Description: Oracle Solaris Cluster full installation group package
          Category: Meta Packages/Group Packages
             State: Not installed
         Publisher: ha-cluster
           Version: 4.3 (Oracle Solaris Cluster 4.3.0.24.0)
     Build Release: 5.11
            Branch: 0.24.0
    Packaging Date: Wed Aug 26 23:33:36 2015
              Size: 5.88 kB
              FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.3,5.11-0.24.0:20150826T233336Z
     
    root@sapm7adm-haapp-0101:~# pkg install --accept --be-name ha-cluster ha-cluster-full
               Packages to install:  96
           Create boot environment: Yes
    Create backup boot environment:  No
     
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                              96/96     7794/7794  324.6/324.6    0B/s
     
    PHASE                                          ITEMS
    Installing new actions                   11243/11243
    Updating package state database                 Done 
    Updating package cache                           0/0 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Updating package cache                           3/3 
     
    A clone of solaris-small exists and has been updated and activated.
    On the next boot the Boot Environment ha-cluster will be
    mounted on '/'.  Reboot when ready to switch to this updated BE.
     
    Updating package cache                           3/3 

    Listing 15: Installing Oracle Solaris Cluster on node 1.

     

    Repeat these steps to configure the ha-cluster publisher and install the Oracle Solaris Cluster package group ha-cluster-full on node 2.

     

    After installing Oracle Solaris Cluster on both nodes, reboot node 1 and destroy the backup boot environment ha-cluster-backup-1, as shown in Listing 16.

     

    root@sapm7adm-haapp-0101:~# reboot
    root@sapm7adm-haapp-0101:~# beadm list
    BE                  Flags Mountpoint Space   Policy Created          
    --                  ----- ---------- -----   ------ -------          
    ha-cluster          NR    /          6.60G   static 2016-01-21 16:47 
    ha-cluster-backup-1 -     -          123.45M static 2016-01-21 16:51 
    install             -     -          88.06M  static 2016-01-19 16:53 
    solaris-small       -     -          14.02M  static 2016-01-21 16:35 
    root@sapm7adm-haapp-0101:~# beadm destroy -F ha-cluster-backup-1

    Listing 16: Rebooting after the installation of Oracle Solaris Cluster on node 1.

     

    Repeat the same step to reboot and destroy the boot environment for node 2, sapm7adm-haapp-0201.

     

    Step 4. Prepare for cluster creation.

     

    The steps in Listing 17 set up necessary prerequisites on node 1 (sapm7adm-haapp-0101) before creating the cluster.

     

    root@sapm7adm-haapp-0101:~# svccfg -s svc:/network/rpc/bind setprop config/local_only = boolean: false
    root@sapm7adm-haapp-0101:~# svccfg -s svc:/network/rpc/bind listprop config/local_only
    config/local_only boolean     false
    root@sapm7adm-haapp-0101:~# netadm list -p ncp defaultfixed
    TYPE        PROFILE        STATE
    ncp         DefaultFixed   online

    Listing 17: Prerequisites for cluster creation on node 1.

     

    Repeat the steps in Listing 17 to configure prerequisites on node 2 (sapm7adm-haapp-0201).

     

    Step 5. Configure network access policies for the cluster.

     

    During the initial configuration of a new cluster, cluster configuration commands are issued by one system, called the control node. The control node issues the command to establish the new cluster and configures other specified systems as cluster nodes. The clauth command controls network access policies for machines configured as nodes of the new cluster. Before running clauth, add the directory /usr/cluster/bin to the default path for executables in the .profile file on node 1:

     

    export PATH=/usr/bin:/usr/sbin
    PATH=$PATH:/usr/cluster/bin
     
    ".profile" 27 lines, 596 characters written

    Listing 18: Adding the path to the cluster software executables.

     

    Configure the access policies on both cluster nodes. TCP wrappers for remote procedure call (RPC) must be disabled on all nodes of the cluster. The clauth command authorizes acceptance of commands from the control node, which is node 1 (sapm7adm-haapp-0101) in this deployment.

     

    root@sapm7adm-haapp-0101:~# svccfg -s rpc/bind listprop config/enable_tcpwrappers
    config/enable_tcpwrappers boolean     false
     
    root@sapm7adm-haapp-0201:~# svccfg -s rpc/bind listprop config/enable_tcpwrappers
    config/enable_tcpwrappers boolean     false
     
    root@sapm7adm-haapp-0201:~# PATH=$PATH:/usr/cluster/bin
    root@sapm7adm-haapp-0201:~# clauth enable -n sapm7adm-haapp-0101
     
    root@sapm7adm-haapp-0101:~# svcs svc:/network/rpc/scrinstd:default 
    STATE          STIME    FMRI
    disabled       16:51:36 svc:/network/rpc/scrinstd:default
    root@sapm7adm-haapp-0101:~# svcadm enable svc:/network/rpc/scrinstd:default
    root@sapm7adm-haapp-0101:~# svcs svc:/network/rpc/scrinstd:default 
    STATE          STIME    FMRI
    online         17:12:11 svc:/network/rpc/scrinstd:default
     
    root@sapm7adm-haapp-0201:~# svcs svc:/network/rpc/scrinstd:default
    STATE          STIME    FMRI
    online         17:10:06 svc:/network/rpc/scrinstd:default

    Listing 19: Configuring cluster access policies.
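
    Listing 19 shows that TCP wrappers for RPC are already disabled (false) on both nodes of this deployment. If the listprop check had reported true on a node, the property could be disabled on that node with commands along these lines before proceeding:

    root@sapm7adm-haapp-0101:~# svccfg -s rpc/bind setprop config/enable_tcpwrappers = boolean: false
    root@sapm7adm-haapp-0101:~# svcadm refresh rpc/bind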

     

    Creating a Cluster Using the Oracle Solaris Cluster Manager BUI

     

    To finish the installation, create a cluster using Oracle Solaris Cluster Manager (Figure 2), a browser-based user interface (BUI) for the software. Connect to port 8998 on the first node (in this case, to https://sapm7adm-haapp-0101:8998/). Currently, the BUI supports configuration tasks only when they are performed by the root user.

     

    f2.png

    Figure 2. Connecting to the Oracle Solaris Cluster Manager BUI.
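
    The BUI wizard drives the same configuration that the command-line tools perform. If a browser is not available, the cluster could instead be created interactively with the scinstall utility from the control node, for example:

    root@sapm7adm-haapp-0101:~# /usr/cluster/bin/scinstall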

     

    The cluster creation wizard guides you through the process of creating an Oracle Solaris Cluster configuration. It gathers configuration details, displays the results of checks before installing, and then performs an Oracle Solaris Cluster installation. The same BUI is used for managing and monitoring the Oracle Solaris Cluster configuration after installation. When using the BUI to manage the configuration, the corresponding CLI commands are displayed as they are run on the nodes.

     

    The cluster creation wizard (Figure 3) first verifies prerequisites for cluster creation. Select Typical for the Creation Mode, which works well on Oracle SuperCluster for clustered SAP environments.

     

    f3.png

    Figure 3. The Oracle Solaris Cluster wizard simplifies the process of cluster creation.

     

    Select the cluster interconnects ic1 and ic2 (configured previously) as the local transport adapters (Figure 4).

     

    f4.png

    Figure 4. Specify the adapter interfaces for the Oracle Solaris Cluster configuration.

     

    Next, specify the cluster name and nodes for the cluster configuration (Figure 5) and the quorum device (Figure 6). When selecting a quorum device, Oracle Solaris Cluster detects the shared disks that are directly accessible from both nodes; if only one such disk exists, it is proposed automatically, and if more than one is present, you are asked to choose.

     

    f5.png

    Figure 5. Specify the nodes for the Oracle Solaris Cluster configuration.

     

    f6.png

    Figure 6. Specify the quorum configuration for Oracle Solaris Cluster.

     

    Resource security information is displayed (Figure 7), and then the entire configuration is presented for review (Figure 8). At this point, Oracle Solaris Cluster is ready to create the cluster. If desired, select the option from the review screen to perform a cluster check before actual cluster creation.

     

    f7.png

    Figure 7. Resource security information.

     

    f8.png

    Figure 8. Review the Oracle Solaris Cluster configuration.

     

    Figure 9 shows the results of the cluster check. Review the configuration and click Back to make changes if needed. Click the Create button to begin the actual cluster creation. Figure 10 shows the output of the cluster creation steps for the cluster sapm7-haapp-01. This step results in the first node being rebooted as a cluster node.

     

    f9.png

    Figure 9. Cluster check report.

     

    f10.png

    Figure 10. Results of the cluster creation.

     

    Click Finish to initiate the configuration of the remaining node, which will reboot as a cluster node; at this point, it will join the other node and form the cluster.

     

    The nodes are each rebooted to join the cluster. After the reboot, log in again to the BUI to view cluster status. Figure 11 shows status information for the created cluster sapm7-haapp-01.

     

    f11.png

    Figure 11. Oracle Solaris Cluster Manager provides status information about the created cluster.

     

    At this point, there are no resource groups or zone clusters. More detailed information is available through the menu options. For example, by selecting Nodes, you can drill down to see status information for each node (Figure 12), and by selecting Quorum, you can see status information for the quorum device and nodes (Figure 13).

     

    f12.png

    Figure 12. The interface can present detailed status information about cluster nodes.

     

    f13.png

    Figure 13. Quorum device information is also available.
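
    The same information is available from the command line on either cluster node. For example, the following standard Oracle Solaris Cluster commands summarize overall cluster, node, and quorum status:

    root@sapm7adm-haapp-0101:~# cluster status
    root@sapm7adm-haapp-0101:~# clnode status
    root@sapm7adm-haapp-0101:~# clquorum status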

     

    Migrating Zones from the Source Platform

     

    This series of articles focuses on migrating a complete SAP system to an Oracle engineered system, specifically to an Oracle SuperCluster M7. After the Oracle Solaris Cluster software is installed and the cluster has been created, zones from the source platform (in this example, a cluster of Oracle's SPARC T5-8 server nodes connected to an Oracle ZFS Storage Appliance) can be migrated to the destination platform, if desired.

     

    The zone clusters in the source environment have the operating system (OS) configuration for the SAP system, including user accounts, home directories, SAP system resource management settings, and so forth. To minimize the risk of omitting configuration settings that are necessary for the destination SAP system to be operational, Unified Archives—a feature of the Oracle Solaris 11 operating system—can capture the source OS environment with fidelity. Because Unified Archives allow multiple system instances to be archived in a single unified file format, they provide a relatively easy method of migrating zones from an existing system to a new platform or virtual machine.

     

    Step 1. Create Unified Archives of zone cluster nodes on the source system.

     

    On the source system's cluster nodes (sapt58-haapp-01 and sapt58-haapp-02), create Unified Archives of the zone cluster nodes, as shown in Listing 20. Store the archives in a utility directory (/export/software/sap) that is NFS-mounted from the source system's Oracle ZFS Storage Appliance:

     

    root@sapt58-haapp-01:~# clzc halt pr1-haapps-zc
    Waiting for zone halt commands to complete on all the nodes of the zone cluster "pr1-haapps-zc"...
    root@sapt58-haapp-01:~# clzc halt pr1-ascs-zc
    Waiting for zone halt commands to complete on all the nodes of the zone cluster "pr1-ascs-zc"...
     
    root@sapt58-haapp-01:~# archiveadm create -z pr1-ascs-zc -e /export/software/sap/epr1-ascs-01.uar
    Initializing Unified Archive creation resources...
    Unified Archive initialized: /export/software/sap/epr1-ascs-01.uar
    Logging to: /system/volatile/archive_log.119
    Executing dataset discovery...
    Dataset discovery complete
    Preparing archive system image...
    Beginning archive stream creation...
    Archive stream creation complete
    Beginning final archive assembly...
    Archive creation complete
     
    root@sapt58-haapp-02:~# time archiveadm create -z pr1-ascs-zc -e /export/software/sap/epr1-ascs-02.uar
    Initializing Unified Archive creation resources...
    Unified Archive initialized: /export/software/sap/epr1-ascs-02.uar
    Logging to: /system/volatile/archive_log.12311
    Executing dataset discovery...
    Dataset discovery complete
    Preparing archive system image...
    Beginning archive stream creation...
    Archive stream creation complete
    Beginning final archive assembly...
    Archive creation complete
     
    real    41m14.031s
    user    36m34.077s
    sys     4m13.557s
     
    root@sapt58-haapp-01:~# archiveadm create -z pr1-haapps-zc -e /export/software/sap/epr1-haapps-01.uar
    Initializing Unified Archive creation resources...
    Unified Archive initialized: /export/software/sap/epr1-haapps-01.uar
    Logging to: /system/volatile/archive_log.8684
    Executing dataset discovery...
    Dataset discovery complete
    Preparing archive system image...
    Beginning archive stream creation...
    Archive stream creation complete
    Beginning final archive assembly...
    Archive creation complete
     
    root@sapt58-haapp-02:~# archiveadm create -z pr1-haapps-zc -e /export/software/sap/epr1-haapps-02.uar
    Initializing Unified Archive creation resources...
    Unified Archive initialized: /export/software/sap/epr1-haapps-02.uar
    Logging to: /system/volatile/archive_log.14246
    Executing dataset discovery...
    Dataset discovery complete
    Preparing archive system image...
    Beginning archive stream creation...
    Archive stream creation complete
    Beginning final archive assembly...
    Archive creation complete
     
    root@sapt58-haapp-01:~# ls -lh /export/software/sap/*.uar
    -rw-r--r--   1 root     root        1.5G Mar  6 20:50 /export/software/sap/epr1-ascs-01.uar
    -rw-r--r--   1 root     root        595M Mar  6 21:46 /export/software/sap/epr1-ascs-02.uar
    -rw-r--r--   1 root     root         13G Mar  6 22:05 /export/software/sap/epr1-haapps-01.uar
    -rw-r--r--   1 root     root        630M Mar  6 23:40 /export/software/sap/epr1-haapps-02.uar
     
    root@sapt58-haapp-01:~# df -h /export/software/sap/
    Filesystem             Size   Used  Available Capacity  Mounted on
    izfssa-02:/export/sapt58/SOFTWARE/software
                           305G    75G       230G    25%    /export/software

    Listing 20: Creating Unified Archives of zones on both nodes of the source system.

     

    The Unified Archives are created in /export/software/sap on the source system. This utility directory is NFS-mounted from the Oracle ZFS Storage Appliance used to deploy SAP on the source system's nodes. The appliance holds the Oracle Database files, SAP shared file systems, and a utility directory containing SAP binaries, scripts, installation logs, and so forth.
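
    Before transferring the archives, their contents can be inspected with the archiveadm utility. For example, the following optional check (not part of the original procedure) displays details of one of the archives created above, including the systems it can deploy:

    root@sapt58-haapp-01:~# archiveadm info -v /export/software/sap/epr1-ascs-01.uar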

     

    Step 2. Copy over the Unified Archive files to the destination system.

     

    To optimize migration activities, the appliance share mounted as /export/software/sap on the source system is replicated to the internal Oracle ZFS Storage Appliance in the Oracle SuperCluster M7 (the destination system), as shown in Figure 14. For detailed information on how to manage replication between two Oracle ZFS Storage Appliances, see "Configuring Project Replication" in the Oracle ZFS Storage Appliance Administration Guide.

     

    f14.png

    Figure 14. Replication of the SOFTWARE project in progress on the Oracle ZFS Storage Appliance.

     

    The replicated share is mounted read-only on the destination system as /export/software/t58software. Shortly after Unified Archive files are created, they are available on the internal appliance on the destination Oracle SuperCluster M7 system (Listing 21).

     

    root@sapm7adm-haapp-0101:~# ls -lh /export/software/t58software/sap/*.uar
    -rw-r--r--   1 root     root        1.5G Mar  6  2016 /export/software/t58software/sap/epr1-ascs-01.uar
    -rw-r--r--   1 root     root        595M Mar  6  2016 /export/software/t58software/sap/epr1-ascs-02.uar
    -rw-r--r--   1 root     root         13G Mar  6  2016 /export/software/t58software/sap/epr1-haapps-01.uar
    -rw-r--r--   1 root     root        630M Mar  6  2016 /export/software/t58software/sap/epr1-haapps-02.uar

    Listing 21: Viewing the Unified Archive files on the destination system.

     

    An alternative approach is simply to copy the Unified Archive files from the source utility directory to the destination utility directory (Listing 22).

     

    root@sapm7adm-haapp-0101:~# cp -p /net/sapzs-gui-02/export/sapt58/SOFTWARE/software/sap/*.uar /net/192.168.28.2/export/SOFTWARE/software/
     
    root@sapm7adm-haapp-0101:~# ls -lh /net/192.168.28.2/export/SOFTWARE/software/
    total 32564500
    drwxrwxrwx   4 nobody   nobody        10 Mar  7 00:18 SAP-BP
    -rw-r--r--   1 nobody   nobody      1.5G Mar  6 20:50 epr1-ascs-01.uar
    -rw-r--r--   1 nobody   nobody      595M Mar  6 21:46 epr1-ascs-02.uar
    -rw-r--r--   1 nobody   nobody       13G Mar  6 22:05 epr1-haapps-01.uar
    -rw-r--r--   1 nobody   nobody      630M Mar  6 23:40 epr1-haapps-02.uar

    Listing 22: Copying Unified Archive files.

     

    The Unified Archive files will be used later in the final steps of creating the zone cluster.

     

    Step 3. Create a new boot environment with the solaris-large-server package group for the destination zones.

     

    Both nodes of the source cluster, and hence its zone clusters, were initially installed with the solaris-large-server package group, as shown in Listing 23 and Listing 24. For this reason, it is necessary to install the solaris-large-server package group in a new boot environment for the HAAPP domains on both nodes on the destination system, as shown in Listing 25 and Listing 26.

     

    root@sapt58-haapp-01:~# pkg info solaris-large-server
              Name: group/system/solaris-large-server
           Summary: Oracle Solaris Large Server
       Description: Provides an Oracle Solaris large server environment
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.2.12.0.3.0
    Packaging Date: June 22, 2015 04:49:28 PM 
              Size: 5.46 kB
              FMRI: pkg://solaris/group/system/solaris-large-server@0.5.11,5.11-0.175.2.12.0.3.0:20150622T164928Z
    root@sapt58-haapp-01:~# beadm list
    BE                            Flags Mountpoint Space   Policy Created          
    --                            ----- ---------- -----   ------ -------          
    after_SC-w-Desktop            -     -          1.00G   static 2015-02-20 15:19 
    before_Cluster                -     -          201.0K  static 2015-02-11 02:42 
    before_SC                     -     -          127.51M static 2015-02-09 16:43 
    before_SC-1                   -     -          1.02G   static 2015-02-11 12:45 
    before_SC-PKG                 -     -          69.0K   static 2015-02-11 02:37 
    before_SC-backup-1            -     -          69.0K   static 2015-02-11 02:38 
    before_SC-backup-2            -     -          233.0K  static 2015-02-20 15:12 
    pr1_b4sap                     -     -          116.0K  static 2015-07-28 17:49 
    pr1_fixed                     -     -          1.48G   static 2015-07-06 15:59 
    s11.2.13.6_sc4.2.5.1          NR    /          34.63G  static 2015-08-14 19:14 
    s11.2.13.6_sc4.2.5.1-backup-1 -     -          132.13M static 2015-08-14 20:41 
    solaris                       -     -          7.08M   static 2015-02-02 10:09 
    solaris-1                     -     -          3.95G   static 2015-02-09 11:08 
    solaris-1-backup-1            -     -          187.0K  static 2015-02-09 16:44 
    root@sapt58-haapp-01:~# beadm mount solaris /mnt
    root@sapt58-haapp-01:~# pkg -R /mnt info solaris-large-server
              Name: group/system/solaris-large-server
           Summary: Oracle Solaris Large Server
       Description: Provides an Oracle Solaris large server environment
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.2.0.0.42.0
    Packaging Date: June 23, 2014 09:49:37 PM 
              Size: 5.46 kB
              FMRI: pkg://solaris/group/system/solaris-large-server@0.5.11,5.11-0.175.2.0.0.42.0:20140623T214937Z

    Listing 23: Node 1 on the source platform has the solaris-large-server package group installed.

     

    root@sapt58-haapp-02:~# pkg info solaris-large-server
              Name: group/system/solaris-large-server
           Summary: Oracle Solaris Large Server
       Description: Provides an Oracle Solaris large server environment
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.2.12.0.3.0
    Packaging Date: June 22, 2015 04:49:28 PM 
              Size: 5.46 kB
              FMRI: pkg://solaris/group/system/solaris-large-server@0.5.11,5.11-0.175.2.12.0.3.0:20150622T164928Z

    Listing 24: Node 2 on the source platform has the solaris-large-server package group installed.

     

    root@sapm7adm-haapp-0101:~# beadm list
    BE            Flags Mountpoint Space  Policy Created          
    --            ----- ---------- -----  ------ -------          
    ha-cluster    NR    /          11.29G static 2016-01-21 16:47 
    install       -     -          88.06M static 2016-01-19 16:53 
    solaris-small -     -          14.02M static 2016-01-21 16:35 
     
    root@sapm7adm-haapp-0101:~# pkg install --be-name ha-large solaris-large-server
               Packages to install: 103
           Create boot environment: Yes
    Create backup boot environment:  No
     
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                            103/103   19813/19813  149.0/149.0    0B/s
     
    PHASE                                          ITEMS
    Installing new actions                   25759/25759
    Updating package state database                 Done 
    Updating package cache                           0/0 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Updating package cache                           3/3 
     
    A clone of ha-cluster exists and has been updated and activated.
    On the next boot the Boot Environment ha-large will be
    mounted on '/'.  Reboot when ready to switch to this updated BE.
     
    Updating package cache                           3/3 

    Listing 25: Installing the solaris-large-server package group in a new boot environment on node 1 of the destination platform.

     

    root@sapm7adm-haapp-0201:~# beadm list
    BE            Flags Mountpoint Space  Policy Created          
    --            ----- ---------- -----  ------ -------          
    ha-cluster    NR    /          11.45G static 2016-01-21 16:48 
    install       -     -          88.07M static 2016-01-19 17:14 
    solaris-small -     -          14.04M static 2016-01-21 16:35 
     
    root@sapm7adm-haapp-0201:~# pkg install --be-name ha-large solaris-large-server
               Packages to install: 103
           Create boot environment: Yes
    Create backup boot environment:  No
     
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                            103/103   19813/19813  149.0/149.0    0B/s
     
    PHASE                                          ITEMS
    Installing new actions                   25759/25759
    Updating package state database                 Done 
    Updating package cache                           0/0 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Updating package cache                           3/3 
     
    A clone of ha-cluster exists and has been updated and activated.
    On the next boot the Boot Environment ha-large will be
    mounted on '/'.  Reboot when ready to switch to this updated BE.
     
    Updating package cache                           3/3 

    Listing 26: Installing the solaris-large-server package group in a new boot environment on node 2 of the destination platform.

     

    Step 4. Reboot the destination cluster nodes using the new boot environments.

     

    On the destination platform, reboot the cluster nodes using the new boot environments:

     

    root@sapm7adm-haapp-0101:~# reboot
    Mar  9 09:48:39 sapm7adm-haapp-0101 reboot: initiated by root on /dev/console
    root@sapm7adm-haapp-0101:~# beadm list
    BE            Flags Mountpoint Space   Policy Created          
    --            ----- ---------- -----   ------ -------          
    ha-cluster    -     -          141.49M static 2016-01-21 16:47 
    ha-large      NR    /          13.62G  static 2016-03-09 09:44 
    install       -     -          88.06M  static 2016-01-19 16:53 
    solaris-small -     -          14.02M  static 2016-01-21 16:35 
     
    root@sapm7adm-haapp-0201:~# reboot
    Mar  9 09:48:39 sapm7adm-haapp-0201 reboot: initiated by root on /dev/console
    root@sapm7adm-haapp-0201:~# beadm list
    BE            Flags Mountpoint Space   Policy Created          
    --            ----- ---------- -----   ------ -------          
    ha-cluster    -     -          136.26M static 2016-01-21 16:48 
    ha-large      NR    /          13.78G  static 2016-03-09 09:44 
    install       -     -          88.07M  static 2016-01-19 17:14 
    solaris-small -     -          14.04M  static 2016-01-21 16:35 

    Listing 27: Rebooting both destination nodes to use the new boot environment.

     

    Step 5. Add the solaris-desktop package to the HAAPP domain on the first node.

     

    Because the HAAPP domain on node 1 of the source system was installed with the solaris-desktop package group, it is necessary to add that package group to the HAAPP domain on the first node of the destination system:

     

    root@sapt58-haapp-01:~# pkg info group/system/solaris-desktop
              Name: group/system/solaris-desktop
           Summary: Oracle Solaris Desktop
       Description: Provides an Oracle Solaris desktop environment
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.2.12.0.3.0
    Packaging Date: June 22, 2015 04:49:27 PM 
              Size: 5.46 kB
              FMRI: pkg://solaris/group/system/solaris-desktop@0.5.11,5.11-0.175.2.12.0.3.0:20150622T164927Z

    Listing 28: Node 1 on the source platform has the solaris-desktop package group installed.

     

    root@sapm7adm-haapp-0101:~# pkg install group/system/solaris-desktop
               Packages to install: 344
                Services to change:  13
           Create boot environment:  No
    Create backup boot environment: Yes
     
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                            344/344   57782/57782  632.2/632.2    0B/s
     
    PHASE                                          ITEMS
    Installing new actions                   97743/97743
    Updating package state database                 Done
    Updating package cache                           0/0
    Updating image state                            Done
    Creating fast lookup database                   Done
    Updating package cache                           3/3

    Listing 29: Adding the solaris-desktop package to the HAAPP domain on node 1.
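
     

    Other group packages might also differ between the source and destination domains. One way to spot such differences (illustrative only) is to list the installed group packages on each system and compare the two outputs; any group package present on the source but missing on the destination can then be added with pkg install:

     

    root@sapt58-haapp-01:~# pkg list -H 'group/system/*'
    root@sapm7adm-haapp-0101:~# pkg list -H 'group/system/*'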

     

    Creating Zone Clusters

     

    Zone clusters can be created by using the clzonecluster command (clzc is its short form) or by using the BUI provided with Oracle Solaris Cluster. This section shows how to use the Oracle Solaris Cluster Manager BUI to implement zone clustering; a minimal command-line sketch follows for reference.
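
     

    For reference, an equivalent zone cluster could also be created entirely from the command line. The following is only a minimal sketch: the zonepath value is an assumption made for illustration, while the node names, zone hostnames, and public network addresses correspond to the values used in this example. The remainder of this section uses the BUI wizard.

     

    root@sapm7adm-haapp-0101:~# clzc configure pr1-ascs-zc
    clzc:pr1-ascs-zc> create
    clzc:pr1-ascs-zc> set zonepath=/zones/pr1-ascs-zc
    clzc:pr1-ascs-zc> add node
    clzc:pr1-ascs-zc:node> set physical-host=sapm7adm-haapp-0101
    clzc:pr1-ascs-zc:node> set hostname=em7pr1-ascs-01
    clzc:pr1-ascs-zc:node> add net
    clzc:pr1-ascs-zc:node:net> set address=10.129.97.152/20
    clzc:pr1-ascs-zc:node:net> set physical=sc_ipmp0
    clzc:pr1-ascs-zc:node:net> end
    clzc:pr1-ascs-zc:node> end
    clzc:pr1-ascs-zc> add node
    clzc:pr1-ascs-zc:node> set physical-host=sapm7adm-haapp-0201
    clzc:pr1-ascs-zc:node> set hostname=em7pr1-ascs-02
    clzc:pr1-ascs-zc:node> add net
    clzc:pr1-ascs-zc:node:net> set address=10.129.97.153/20
    clzc:pr1-ascs-zc:node:net> set physical=sc_ipmp0
    clzc:pr1-ascs-zc:node:net> end
    clzc:pr1-ascs-zc:node> end
    clzc:pr1-ascs-zc> commit
    clzc:pr1-ascs-zc> exit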

     

    Step 1. Use the cluster BUI to create a new zone cluster, pr1-ascs-zc.

     

    Use a browser to access the Oracle Solaris Cluster Manager via the URL https://sapm7adm-haapp-0101:8998/scm, and log in to the cluster BUI as root. Under Tasks, select Zone Clusters. Select Create to start the zone cluster creation wizard, and perform the steps shown in Figure 15 through Figure 23, which show the creation of the first zone cluster, pr1-ascs-zc. The two zone cluster nodes created in this zone cluster are zones named em7pr1-ascs-01 and em7pr1-ascs-02.

     

    f15.png

    Figure 15. Click Create to start the zone cluster creation wizard.

     

    f16.png

    Figure 16. Define zone cluster creation properties.

     

    f17.png

    Figure 17. Optionally define resource controls for Oracle Solaris system resources.

     

    f18.png

    Figure 18. Optionally define resource controls for CPU and memory.

     

    f19.png

    Figure 19. Select the physical hosts that comprise the zone cluster nodes.

     

    f20.png

    Figure 20. Enter zone cluster settings, including zone hostname, network address, network interface, and default router address.

     

    f21.png

    Figure 21. Review settings and click Create Zone Cluster.

     

    f22.png

    Figure 22. The wizard displays the executed CLI commands.

     

    f23.png

    Figure 23. The zone cluster pr1-ascs-zc is successfully created.

     

    Step 2. Add the IB network to the zone cluster configuration.

     

    At the time of this publication, the zone cluster creation capability in the BUI supports only the creation of a single network. For SAP installations on Oracle SuperCluster, it is recommended to use the internal IB network to connect the SAP application servers to the database and to mount file systems from the internal Oracle ZFS Storage Appliance. IB connections reduce security risks, because the Oracle Database domains do not need to be connected to the external 10GbE network, and they offer better throughput for backups and other data transfers.

     

    Make sure that the following IB host entries exist in the /etc/hosts files on both nodes:

     

    192.168.28.152   im7pr1-ascs-01
    192.168.28.153   im7pr1-ascs-02

    Listing 30: IB entries should exist in /etc/hosts on both nodes.
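
     

    Before adding the IB network, it can also be worth confirming that the stor_ipmp0 IPMP interface is configured in the global zone of each HAAPP domain; a minimal check (run on both nodes) might be:

     

    root@sapm7adm-haapp-0101:~# ipadm show-addr | grep stor_ipmp0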

     

    From the command line, add the second network to the zone cluster:

     

    root@sapm7adm-haapp-0101:~# cat /var/tmp/pr1-ascs-ibnet.conf
    select node hostname=em7pr1-ascs-01
    add net
    set address=192.168.28.152/20
    set physical=stor_ipmp0
    end
    end
    select node hostname=em7pr1-ascs-02
    add net
    set address=192.168.28.153/20
    set physical=stor_ipmp0
    end
    end
    commit
    root@sapm7adm-haapp-0101:~# clzc configure -f /var/tmp/pr1-ascs-ibnet.conf pr1-ascs-zc

    Listing 31: Configure the second IB network for the zone cluster.
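
     

    To confirm that the additional net resources were committed, the zone cluster configuration can be exported and inspected. A quick check such as the following (a sketch; adjust the filter as needed) should show the 192.168.28.152/20 and 192.168.28.153/20 addresses on stor_ipmp0 that were just added:

     

    root@sapm7adm-haapp-0101:~# clzc export pr1-ascs-zc | egrep 'add net|set address|set physical'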

     

    Step 3. Create a system configuration profile.

     

    The next step is to create the system configuration profiles used when installing the zone cluster nodes. These virtual nodes are individual Oracle Solaris Zones, which are lightweight virtual hosts that provide application isolation. The sysconfig utility creates an initial system configuration profile that will be used to configure the zones in the zone cluster:

     

    root@sapm7adm-haapp-0201:~# sysconfig create-profile -o /net/sapm7-h2-storIB/export\
    /SOFTWARE/software/util/zc/sc_profile -g location,identity,naming_service,users

    Listing 32: Running sysconfig to create a system configuration profile.

     

    Navigate through the sysconfig screens (starting with Figure 24) to create the system configuration profile. Running the sysconfig utility to create a profile resembles an interactive configuration of the Oracle Solaris operating system.

     

    f24.png

    Figure 24. The sysconfig utility creates a system configuration profile.

     

    The summary screen (Figure 25) shows the settings for the zone em7pr1-ascs-01, a node in the pr1-ascs-zc zone cluster.

     

    f25.png

    Figure 25. The sysconfig review screen shows profile settings for the zone.

     

    A message appears when the sysconfig utility finishes and the profile is created. The profile can then be used to install zones in the new zone cluster.

     

    SC profile successfully generated as:
    /net/sapm7-h2-storIB/export/SOFTWARE/software/util/zc/sysconfig_profile/sc_profile.xml
     
    Exiting System Configuration Tool. Log is available at:
    /system/volatile/sysconfig/sysconfig.log.2449
     
    root@sapm7adm-haapp-0201:~# ls /net/sapm7-h2-storIB/export/SOFTWARE/software/util/zc/sysconfig_profile/
    sc_profile.xml

    Listing 33: Successful generation of a system configuration profile.

     

    Step 4. Install each node of the pr1-ascs-zc zone cluster.

     

    As explained previously, to copy OS and SAP administrator user configurations with fidelity from the SAP source system, you can use Unified Archives to migrate zones from the source system and construct zone cluster nodes on the destination system. Steps in the earlier section, "Migrating Zones from the Source Platform," created Unified Archive files that captured the zones that make up the zone cluster on the source system. These files can now be used to migrate zone cluster nodes from the source platform to the destination system.

     

    Using the corresponding Unified Archive file, install each node of the destination system's pr1-ascs-zc zone cluster:

     

    root@sapm7adm-haapp-0101:~# clzc install -c /net/sapm7-h2-storIB/export/SOFTWARE\
    /software/util/zc/sysconfig_profile/sc_profile.xml -n sapm7adm-haapp-0101 -a /net\
    /192.168.28.2/export/SOFTWARE/software/epr1-ascs-01.uar pr1-ascs-zc

    Listing 34: Installing node 1 of the pr1-ascs-zc zone cluster using a Unified Archive file.

     

    root@sapm7adm-haapp-0201:~# clzc install -c /net/sapm7-h2-storIB/export/SOFTWARE\
    /software/util/zc/sysconfig_profile/sc_profile.xml -n sapm7adm-haapp-0201 -a /net\
    /192.168.28.2/export/SOFTWARE/software/epr1-ascs-02.uar pr1-ascs-zc

    Listing 35: Installing node 2 of the pr1-ascs-zc zone cluster using a Unified Archive file.

     

    For a fresh installation (one that is not using a pre-existing Unified Archive file), run the single command in Listing 36 instead.

     

    root@sapm7adm-haapp-0101:~# clzc install -c /net/sapm7-h2-storIB/export/SOFTWARE\
    /software/util/zc/sysconfig_profile/sc_profile.xml pr1-ascs-zc

    Listing 36: Zone cluster fresh installation.

     

    Step 5. Boot the pr1-ascs-zc zone cluster and verify node status using the cluster BUI.

     

    In two separate terminal windows, connect to the console for each node in the zone cluster.

     

    root@sapm7adm-haapp-0101:~# zlogin -C pr1-ascs-zc
    [Connected to zone 'pr1-ascs-zc' console]

    Listing 37: Connecting to the console of node 1 in the pr1-ascs-zc zone cluster.

     

    root@sapm7adm-haapp-0201:~# zlogin -C pr1-ascs-zc
    [Connected to zone 'pr1-ascs-zc' console]

    Listing 38: Connecting to the console of node 2 in the pr1-ascs-zc zone cluster.

     

    Perform the steps shown in Figure 26 and Figure 27 to boot the zone cluster nodes. Watch the console messages to confirm that the zone cluster nodes boot up successfully. Figure 28 shows that, after booting, the zone cluster nodes (em7pr1-ascs-01 and em7pr1-ascs-02) are online and running.

     

    f26.png

    Figure 26. Click Boot to boot each node in the zone cluster.

     

    f27.png

    Figure 27. Confirm the boot.

     

    f28.png

    Figure 28. Confirm that the zone cluster nodes are online and running.
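
     

    Alternatively, the zone cluster can be booted and its state checked from the command line of one of the global-cluster nodes; a brief sketch:

     

    root@sapm7adm-haapp-0101:~# clzc boot pr1-ascs-zc
    root@sapm7adm-haapp-0101:~# clzc status pr1-ascs-zc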

     

    Step 6. Log in to the zone cluster nodes and verify their configurations.

     

    Log in to the zone cluster nodes to verify the installed operating system and Oracle Solaris Cluster versions, as well as the networking settings. Listing 39 and Listing 40 show the commands executed on node 1 and node 2, respectively, to display the software versions for the nodes in the pr1-ascs-zc zone cluster.

     

    root@sapm7adm-haapp-0101:~# zlogin pr1-ascs-zc
    [Connected to zone 'pr1-ascs-zc' pts/2]
    Last login: Tue Mar  1 20:26:02 2016 on pts/3
    Oracle Corporation      SunOS 5.11      11.3    September 2015
    root@em7pr1-ascs-01:~# pkg info ha-cluster-full
              Name: ha-cluster/group-package/ha-cluster-full
           Summary: Oracle Solaris Cluster full installation group package
       Description: Oracle Solaris Cluster full installation group package
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: ha-cluster
           Version: 4.3 (Oracle Solaris Cluster 4.3.0.24.0)
     Build Release: 5.11
            Branch: 0.24.0
    Packaging Date: Wed Aug 26 23:33:36 2015
              Size: 5.88 kB
              FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.3,5.11-0.24.0:20150826T233336Z
    root@em7pr1-ascs-01:~# pkg info entire         
              Name: entire
           Summary: Incorporation to lock all system packages to the same build
       Description: This package constrains system package versions to the same
                    build.  WARNING: Proper system update and correct package
                    selection depend on the presence of this incorporation.
                    Removing this package will result in an unsupported system.
          Category: Meta Packages/Incorporations
             State: Installed
         Publisher: solaris
           Version: 0.5.11 (Oracle Solaris 11.3.1.5.0)
     Build Release: 5.11
            Branch: 0.175.3.1.0.5.0
    Packaging Date: Tue Oct 06 14:00:51 2015
              Size: 5.46 kB
              FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.3.1.0.5.0:20151006T140051Z
     
    root@em7pr1-ascs-01:~# ipadm show-addr
    ADDROBJ           TYPE     STATE        ADDR
    lo0/?             inherited ok          127.0.0.1/8
    sc_ipmp0/?        inherited ok          10.129.97.152/20
    lo0/?             inherited ok          ::1/128
     
    root@em7pr1-ascs-01:~# netstat -rn
     
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref     Use     Interface 
    -------------------- -------------------- ----- ----- ---------- --------- 
    default              10.129.96.1          UG        4      94043 sc_ipmp0  
    default              10.129.96.1          UG        2      18290           
    10.129.96.0          10.129.97.152        U         5         16 sc_ipmp0  
    127.0.0.1            127.0.0.1            UH        2         18 lo0       
     
    Routing Table: IPv6
      Destination/Mask            Gateway                   Flags Ref   Use    If   
    --------------------------- --------------------------- ----- --- ------- ----- 
    ::1                         ::1                         UH      2      98 lo0   
     
    root@em7pr1-ascs-01:~# getent hosts www.yahoo.com
    98.138.252.30   fd-fp3.wg1.b.yahoo.com www.yahoo.com
    98.138.253.109  fd-fp3.wg1.b.yahoo.com www.yahoo.com

    Listing 39: Verifying the configuration of node 1 in the zone cluster.

     

    root@sapm7adm-haapp-0201:~# zlogin pr1-ascs-zc
    [Connected to zone 'pr1-ascs-zc' pts/2]
    Last login: Sun Mar  6 20:40:22 2016 on pts/3
    Oracle Corporation      SunOS 5.11      11.3    September 2015
    root@em7pr1-ascs-02:~# pkg info ha-cluster-full
              Name: ha-cluster/group-package/ha-cluster-full
           Summary: Oracle Solaris Cluster full installation group package
       Description: Oracle Solaris Cluster full installation group package
          Category: Meta Packages/Group Packages
             State: Installed
         Publisher: ha-cluster
           Version: 4.3 (Oracle Solaris Cluster 4.3.0.24.0)
     Build Release: 5.11
            Branch: 0.24.0
    Packaging Date: Wed Aug 26 23:33:36 2015
              Size: 5.88 kB
              FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.3,5.11-0.24.0:20150826T233336Z
    root@em7pr1-ascs-02:~# pkg info entire         
              Name: entire
           Summary: Incorporation to lock all system packages to the same build
       Description: This package constrains system package versions to the same
                    build.  WARNING: Proper system update and correct package
                    selection depend on the presence of this incorporation.
                    Removing this package will result in an unsupported system.
          Category: Meta Packages/Incorporations
             State: Installed
         Publisher: solaris
           Version: 0.5.11 (Oracle Solaris 11.3.1.5.0)
     Build Release: 5.11
            Branch: 0.175.3.1.0.5.0
    Packaging Date: Tue Oct 06 14:00:51 2015
              Size: 5.46 kB
              FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.3.1.0.5.0:20151006T140051Z
    root@em7pr1-ascs-02:~# ipadm show-addr
    ADDROBJ           TYPE     STATE        ADDR
    lo0/?             inherited ok          127.0.0.1/8
    sc_ipmp0/?        inherited ok          10.129.97.153/20
    lo0/?             inherited ok          ::1/128
    root@em7pr1-ascs-02:~# netstat -rn
     
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref     Use     Interface 
    -------------------- -------------------- ----- ----- ---------- --------- 
    default              10.129.96.1          UG        3      30821 sc_ipmp0  
    default              10.129.96.1          UG        3       3909           
    10.129.96.0          10.129.97.153        U         6         16 sc_ipmp0  
    127.0.0.1            127.0.0.1            UH        2          0 lo0       
     
    Routing Table: IPv6
      Destination/Mask            Gateway                   Flags Ref   Use    If   
    --------------------------- --------------------------- ----- --- ------- ----- 
    ::1                         ::1                         UH      4      28 lo0   
    root@em7pr1-ascs-02:~# getent hosts www.yahoo.com
    98.138.253.109  fd-fp3.wg1.b.yahoo.com www.yahoo.com
    98.138.252.30   fd-fp3.wg1.b.yahoo.com www.yahoo.com

    Listing 40: Verifying the configuration of node 2 in the zone cluster.

     

    The zone cluster pr1-ascs-zc now exists with two nodes, em7pr1-ascs-01 and em7pr1-ascs-02, and it is up and running.

     

    In the example deployment, a separate zone cluster (pr1-haapps-zc) is created and subsequently installed with the SAP Primary Application Server (PAS) and any other SAP application servers that require the highest levels of availability. To create this second zone cluster, repeat the steps in this section ("Creating Zone Clusters") to construct the zone cluster pr1-haapps-zc. Using a separate zone cluster for the SAP application servers provides deployment isolation that protects the central services from risks such as operator errors or application server CPU utilization degrading the responsiveness of the central services. If the SAP system has only a lightweight PAS as a mission-critical application service, the PAS could instead be deployed in the same zone cluster as the central services (ASCS).

     

    When deploying the pr1-haapps-zc zone cluster, use the other Unified Archives to install the zone cluster nodes:

     

    -rw-r--r--   1 nobody   nobody       13G Mar  6 22:05 epr1-haapps-01.uar
    -rw-r--r--   1 nobody   nobody      630M Mar  6 23:40 epr1-haapps-02.uar

    Listing 41: Unified Archive files for the nodes in the pr1-haapps-zc zone cluster.

     

    Define a different set of IB host entries in /etc/hosts with IP addresses on the IB network:

     

    192.168.28.154   im7pr1-haapps-01
    192.168.28.155   im7pr1-haapps-02

    Listing 42: IB hostnames and IP addresses are defined for nodes in the pr1-haapps-zc zone cluster.
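
     

    Assuming the pr1-haapps-zc zone cluster has already been created and a system configuration profile has been generated for its nodes (the profile file name below is hypothetical), the installation commands mirror Listing 34 and Listing 35, this time using the epr1-haapps Unified Archive files:

     

    root@sapm7adm-haapp-0101:~# clzc install -c /net/sapm7-h2-storIB/export/SOFTWARE\
    /software/util/zc/sysconfig_profile/sc_profile_haapps.xml -n sapm7adm-haapp-0101 -a /net\
    /192.168.28.2/export/SOFTWARE/software/epr1-haapps-01.uar pr1-haapps-zc
    root@sapm7adm-haapp-0201:~# clzc install -c /net/sapm7-h2-storIB/export/SOFTWARE\
    /software/util/zc/sysconfig_profile/sc_profile_haapps.xml -n sapm7adm-haapp-0201 -a /net\
    /192.168.28.2/export/SOFTWARE/software/epr1-haapps-02.uar pr1-haapps-zc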

     

    Listing 43 shows both zone clusters up and running in the HAAPP domains.

     

    root@sapm7adm-haapp-0101:~# clzc status 
     
    === Zone Clusters ===
     
    --- Zone Cluster Status ---
     
    Name            Brand     Node Name             Zone Host Name     Status   Zone Status
    ----            -----     ---------             --------------     ------   -----------
    pr1-ascs-zc     solaris   sapm7adm-haapp-0101   em7pr1-ascs-01     Online   Running
                              sapm7adm-haapp-0201   em7pr1-ascs-02     Online   Running
     
    pr1-haapps-zc   solaris   sapm7adm-haapp-0101   em7pr1-haapps-01   Online   Running
                              sapm7adm-haapp-0201   em7pr1-haapps-02   Online   Running

    Listing 43: Separate zone clusters exist for ASCS services and PAS and HA application services.

     

    Step 7. Configure the zone cluster resource control.

     

    To isolate the CPU resources that run ASCS and the SAP application servers, the zone clusters can be configured with dedicated CPUs. As an example, the commands in Listing 44 assign one SPARC M7 processor core (eight hardware threads) to the zone cluster hosting ASCS and two SPARC M7 processor cores (16 hardware threads) to the zone cluster hosting the mission-critical application servers:

     

    root@sapm7adm-haapp-01:~# clzc configure pr1-ascs-zc
    clzc:pr1-ascs-zc> add dedicated-cpu
    clzc:pr1-ascs-zc:dedicated-cpu> set ncpus=8
    clzc:pr1-ascs-zc:dedicated-cpu> end
    clzc:pr1-ascs-zc> exit
    root@sapm7adm-haapp-01:~# clzc configure pr1-haapps-zc
    clzc:pr1-haapps-zc> add dedicated-cpu
    clzc:pr1-haapps-zc:dedicated-cpu> set ncpus=16
    clzc:pr1-haapps-zc:dedicated-cpu> end
    clzc:pr1-haapps-zc> exit

    Listing 44: Configuring dedicated CPU resources for zone clusters.
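
     

    Before rebooting, the committed dedicated-cpu settings can be double-checked from the global zone (shown here for pr1-ascs-zc; repeat for pr1-haapps-zc); a minimal sketch:

     

    root@sapm7adm-haapp-0101:~# clzc export pr1-ascs-zc | egrep 'dedicated-cpu|ncpus'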

     

    Reboot the zone clusters for the new resource configuration parameters to take effect:

     

    root@sapm7adm-haapp-01:~# clzc reboot +

    Listing 45: Rebooting the zone clusters.

     

    Verify that the CPU resources are recognized in the nodes of both zone clusters:

     

    root@sapm7adm-haapp-0101:~# zlogin pr1-haapps-zc
    root@em7pr1-haapps-01:~# psrinfo -pv
    The physical processor has 2 cores and 16 virtual processors (8-23)
      The core has 8 virtual processors (8-15)
      The core has 8 virtual processors (16-23)
        SPARC-M7 (chipid 2, clock 4133 MHz)

    Listing 46: Verifying the resources recognized by pr1-haapps-zc.

     

    root@sapm7adm-haapp-0101:~# zlogin pr1-ascs-zc
    root@em7pr1-ascs-01:~# psrinfo -pv
    The physical processor has 8 virtual processors (0-7)
      SPARC-M7 (chipid 2, clock 4133 MHz)

    Listing 47: Verifying the resources recognized by pr1-ascs-zc.

     

    Final Thoughts

     

    To deploy highly available SAP applications, critical SAP components and services must be placed under cluster control in clustered virtual environments. This article describes how to install Oracle Solaris Cluster and configure a zone cluster across two nodes; this cluster is used for the ABAP Central Services instance (ASCS) and the Enqueue Replication Server instance (ERS). By repeating the zone cluster creation steps shown here, a second zone cluster can be created for the SAP Primary Application Server (PAS) and any other mission-critical Additional Application Server (AAS). The third article in this series (Part 3: Installing Highly Available SAP with Oracle Solaris Cluster on Oracle SuperCluster) gives step-by-step instructions for installing and configuring the SAP ABAP stack components in these zone clusters.

     

    See Also

     

    Refer to these resources for more information:

     

    Online Resources

     

     

    White Papers

     

     

    Documentation

     

     

    About the Authors

     

    Jan Brosowski is a principal sales consultant for Oracle Systems in Europe North. Located in Walldorf, Germany, he is responsible for developing customer-specific architectures and operating models for both SAP and Hyperion systems, accompanying projects from requirements specification through go-live. Brosowski holds a Master of Business and Engineering degree and has worked with SAP systems in various roles for more than 15 years.

     

    Victor Galis is a master sales consultant in the global Oracle Solution Center organization. He supports customers and sales teams in architecting SAP environments based on Oracle hardware and technology. He works with SAP Basis and DBA teams, systems and storage administrators, as well as business owners and executives. His role is to understand current environments, business requirements, and pain points, as well as future growth, and to map them to SAP landscapes that meet both performance and high availability expectations. He has been involved with many SAP on Oracle SuperCluster customer environments as an architect and has provided deployment and go-live assistance. Galis is an SAP-certified consultant and an Oracle Database administrator.

     

    Gia-Khanh Nguyen is an architect of the Oracle Solaris Cluster product. He contributes to product requirement and design specifications for the support of enterprise solutions on Oracle systems, and develops demonstrations of key features.

     

    Pierre Reynes is a solution manager for Oracle Optimized Solution for SAP and Oracle Optimized Solution for PeopleSoft. He is responsible for driving the strategy and efforts to help raise customer and market awareness for Oracle Optimized Solutions in these areas. Reynes has over 25 years of experience in the computer and network industries.