Setting Up Highly Available SAP Systems on Oracle SuperCluster—Part 2: Deploying and Configuring Oracle Solaris Cluster for SAP on Oracle SuperCluster


by Jan Brosowski, Victor Galis, Gia-Khanh Nguyen, and Pierre Reynes

This article is Part 2 of a three-part series that describes steps for setting up SAP on Oracle SuperCluster in a highly available configuration. This article focuses on procedures for deploying the Oracle Solaris Cluster software and using it to configure two zone clusters across two nodes.

Table of Contents
Introduction
Preparing Cluster Interconnects and iSCSI LUN to Be Used as the Quorum Device
Installing and Configuring the Oracle Solaris Cluster 4.3 Software
Creating a Cluster Using the Oracle Solaris Cluster Manager BUI
Migrating Zones from the Source Platform
Creating Zone Clusters
Final Thoughts
See Also
About the Authors

Introduction

This article is Part 2 of a three-part series that provides best practices and recommendations for setting up highly available SAP systems on Oracle engineered systems. A team of Oracle engineers and SAP experts used Oracle SuperCluster M7 in a sample deployment to compile and test the step-by-step procedures and recommendations provided in this article series.

To achieve high availability (HA), it is necessary to put mission-critical SAP components under the control of Oracle Solaris Cluster, creating a cluster of two or more zones on Oracle SuperCluster. Oracle Solaris Cluster can then orchestrate failover or disaster recovery strategies while managing infrastructure resources such as database, network connectivity, and shared storage.

Oracle Optimized Solutions provide tested and proven best practices for how to run software products on Oracle systems.

This article series ("Setting up Highly Available SAP Systems on Oracle SuperCluster") divides configuration tasks into three articles:

  • Part 1: Configuring Virtualization for SAP on Oracle SuperCluster. The first article describes the steps needed to prepare virtual environments on Oracle SuperCluster. These virtual environments are Oracle VM Server for SPARC logical domains (LDOMs), which are separate, isolated I/O domains that help improve application resiliency. Two pairs of I/O domains are created on the nodes (Figure 1): one domain pair for SAP components and application servers that require advanced HA, and a second pair for application servers that are not mission-critical.
  • Part 2: Deploying and Configuring Oracle Solaris Cluster for SAP on Oracle SuperCluster. Part 2 is this article, which describes the steps for installing the Oracle Solaris Cluster software and using it to configure two zone clusters across the two nodes shown in Figure 1. The first zone cluster is dedicated to the most-critical SAP components: the ABAP Central Services instance (ASCS) and Enqueue Replication Server instance (ERS). The second zone cluster is used for the SAP Primary Application Server (PAS) and any other mission-critical Additional Application Server (AAS).
  • Part 3: Installing Highly Available SAP with Oracle Solaris Cluster on Oracle SuperCluster. The third article describes the step-by-step procedures for installing SAP ABAP stack components in the zone clusters and configuring them for HA. Oracle Solaris Cluster implements the concept of logical hostnames, which are monitored for availability; if necessary, the associated IP address (managed as a resource) can be moved transparently to another node, as sketched in the example after this list.
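
As a preview of Part 3, the following minimal sketch shows how a logical hostname is typically placed under cluster control. The resource group, hostname, and resource names here are hypothetical placeholders, not the names used in this deployment:

# Hypothetical names; the actual groups and resources are created in Part 3.
# The logical hostname must be resolvable on all nodes (for example, via /etc/hosts).
clresourcegroup create ascs-rg
clreslogicalhostname create -g ascs-rg -h sap-ascs-lh ascs-lh-rs
clresourcegroup online -M ascs-rg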


Figure 1. An HA configuration for SAP uses the zone clustering capabilities of Oracle Solaris Cluster.

Preparing Cluster Interconnects and iSCSI LUN to Be Used as the Quorum Device

This section contains step-by-step instructions to prepare Oracle SuperCluster shared storage and network resources for use by Oracle Solaris Cluster, which involves

  • Creating InfiniBand (IB) partition data links and IP interfaces for the cluster interconnects
  • Configuring the iSCSI LUN that is used as a quorum device

Step 1. Create the IB partition data links and IP interfaces for the cluster interconnects.

Before Oracle Solaris Cluster is installed and configured, it is necessary to configure storage and network interconnects for the two-node cluster that the software will manage. Dedicated IB partitions are created to support the internode traffic for the Oracle Solaris Cluster interconnect.

Log in to each HAAPP I/O domain (the first article in this series created the HAAPP domains as sapm7adm-haapp-0101 and sapm7adm-haapp-0201). Identify the InfiniBand (IB) links in the domain, and create the IB partition data links and IP interfaces for the two cluster interconnects, ic1 and ic2. Listing 1 and Listing 2 show the commands executed in the domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

root@sapm7adm-haapp-0101:~# dladm show-ib
LINK      HCAGUID        PORTGUID       PORT STATE   GWNAME       GWPORT   PKEYS
net6      10E100014AC620 10E000654AC622 2    up      --           --       8503,8512,FFFF
net5      10E100014AC620 10E000654AC621 1    up      --           --       8503,8511,FFFF
 
root@sapm7adm-haapp-0101:~# dladm create-part -l net5 -P 8511 ic1
root@sapm7adm-haapp-0101:~# dladm create-part -l net6 -P 8512 ic2
root@sapm7adm-haapp-0101:~# dladm show-part
LINK         PKEY  OVER         STATE    FLAGS
sys-root0    8503  net5         up       f---
sys-root1    8503  net6         up       f---
stor_ipmp0_0 8503  net6         up       f---
stor_ipmp0_1 8503  net5         up       f---
ic1          8511  net5         unknown  ----
ic2          8512  net6         unknown  ----
root@sapm7adm-haapp-0101:~# ipadm create-ip ic1
root@sapm7adm-haapp-0101:~# ipadm create-ip ic2

Listing 1: Configuring the interconnects on domain sapm7adm-haapp-0101.

root@sapm7adm-haapp-0201:~# dladm show-ib
LINK      HCAGUID        PORTGUID       PORT STATE   GWNAME       GWPORT   PKEYS
net5      10E100014AA7B0 10E000654AA7B1 1    up      --           --       8503,8511,FFFF
net6      10E100014AA7B0 10E000654AA7B2 2    up      --           --       8503,8512,FFFF
root@sapm7adm-haapp-0201:~# dladm create-part -l net5 -P 8511 ic1
root@sapm7adm-haapp-0201:~# dladm create-part -l net6 -P 8512 ic2
root@sapm7adm-haapp-0201:~# dladm show-part
LINK         PKEY  OVER         STATE    FLAGS
sys-root0    8503  net5         up       f---
sys-root1    8503  net6         up       f---
stor_ipmp0_0 8503  net6         up       f---
stor_ipmp0_1 8503  net5         up       f---
ic1          8511  net5         unknown  ----
ic2          8512  net6         unknown  ----
root@sapm7adm-haapp-0201:~# ipadm create-ip ic1
root@sapm7adm-haapp-0201:~# ipadm create-ip ic2
root@sapm7adm-haapp-0101:~# iscsiadm modify discovery -s enable
root@sapm7adm-haapp-0201:~# iscsiadm modify discovery -s enable

Listing 2: Configuring the interconnects in the domain sapm7adm-haapp-0201.

Interfaces ic1 and ic2 are now prepared to be used as cluster interconnects using partitions 8511 and 8512. It is important to configure the interfaces to use the same partitions on both nodes. In this example, ic1 is on partition 8511 and ic2 is on partition 8512 on both nodes. The interfaces are configured on different ports connected to different IB switches, preventing the failure of a single switch from disabling both interconnects.
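
Before installing the cluster software, it can be worth confirming that both nodes report the same partition-to-interface mapping. A minimal check, assuming the interfaces were created as shown in Listings 1 and 2:

# Run on each node; ic1 should show PKEY 8511 and ic2 PKEY 8512.
dladm show-part | grep '^ic'
# Both IP interfaces should exist (addresses are assigned during cluster creation).
ipadm show-if | grep '^ic'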

Step 2. Configure the iSCSI LUN to be used as the quorum device.

A quorum device is implemented as an iSCSI LUN on the shared internal Oracle ZFS Storage Appliance. (Oracle Solaris Cluster uses the quorum device to prevent the data corruption that a catastrophic situation, such as split brain or amnesia, could otherwise cause.)

Configuring the quorum device requires several substeps, as follows:

  • Identifying the iSCSI initiators
  • Creating a quorum iSCSI initiator group
  • Creating the quorum iSCSI target and target group
  • Creating a quorum project and an iSCSI LUN for the quorum device
  • Configuring the cluster nodes to see the quorum iSCSI LUN

On Oracle SuperCluster, iSCSI LUNs are used as boot devices. The global zone is set up for accessing the iSCSI LUNs from the internal Oracle ZFS Storage Appliance.

Identify the iSCSI initiator nodes used to boot each node. Listing 3 and Listing 4 show the commands executed in domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

root@sapm7adm-haapp-0101:~# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:boot.00144ff828d4
Initiator node alias: -
        Login Parameters (Default/Configured):
                Header Digest: NONE/-
                Data Digest: NONE/-
                Max Connections: 65535/-
        Authentication Type: NONE
        RADIUS Server: NONE
        RADIUS Access: disabled
        Tunable Parameters (Default/Configured):
                Session Login Response Time: 60/-
                Maximum Connection Retry Time: 180/240
                Login Retry Time Interval: 60/-
        Configured Sessions: 1

Listing 3: Identifying the initiator nodes on the first cluster node.

Listing 4 shows the commands in the second domain, sapm7adm-haapp-0201.

root@sapm7adm-haapp-0201:~# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:boot.00144ff9a0f9
Initiator node alias: -
        Login Parameters (Default/Configured):
                Header Digest: NONE/-
                Data Digest: NONE/-
                Max Connections: 65535/-
        Authentication Type: NONE
        RADIUS Server: NONE
        RADIUS Access: disabled
        Tunable Parameters (Default/Configured):
                Session Login Response Time: 60/-
                Maximum Connection Retry Time: 180/240
                Login Retry Time Interval: 60/-
        Configured Sessions: 1

Listing 4: Identifying the initiator nodes on the second cluster node.

Step 3. Configure the quorum device.

Identify the Oracle ZFS Storage Appliance hostnames. The hostnames for the Oracle ZFS Storage Appliance cluster heads in the example deployment are

10.129.112.136 sapm7-h1-storadm
10.129.112.137 sapm7-h2-storadm
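
The appliance CLI contexts shown in the following listings are reached by logging in over ssh and entering the context path. A minimal sketch, assuming root access to the first head:

ssh root@sapm7-h1-storadm
# At the appliance CLI prompt, descend into the iSCSI initiators context:
configuration san initiators iscsi
ls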

Log in to each cluster head host and list the iSCSI initiators (Listing 5).

sapm7-h1-storadm:configuration san initiators iscsi> ls
Initiators:
 
NAME          ALIAS
initiator-000 init_sc1cn1dom0
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.0010e0479e74
 
initiator-001 init_sc1cn1dom1
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ff8faae
 
initiator-002 init_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ff97c9b
 
initiator-003 init_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ff828d4
 
initiator-004 init_sc1cn2dom0
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.0010e0479e75
 
initiator-005 init_sc1cn2dom1
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ffbf174
 
initiator-006 init_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ffb3b6c
initiator-007 init_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
              |
              +-> INITIATOR
                  iqn.1986-03.com.sun:boot.00144ff9a0f9
 
 
Children:
                           groups => Manage groups

Listing 5: Identifying the iSCSI initiators on each cluster node.

Note that the iSCSI initiators used to boot the HAAPP domains already exist. The commands in Listing 6 create the quorum iSCSI initiator group (QuorumGroup-haapp-01) containing both initiators, because both nodes must be able to access the quorum LUN:

sapm7-h1-storadm:configuration san initiators iscsi groups> create
sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> ls
Properties:
                          name = (unset)
                    initiators = (unset)
 
sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> set name=QuorumGroup-haapp-01
                          name = QuorumGroup-haapp-01 (uncommitted)
sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> set initiators=\
iqn.1986-03.com.sun:boot.00144ff828d4,iqn.1986-03.com.sun:boot.00144ff9a0f9
                    initiators = iqn.1986-03.com.sun:boot.00144ff828d4,iqn.1986-03.com.sun:boot.00144ff9a0f9 (uncommitted)
sapm7-h1-storadm:configuration san initiators iscsi group-010 (uncommitted)> commit
sapm7-h1-storadm:configuration san initiators iscsi groups> ls
Groups:
 
GROUP     NAME
group-000 QuorumGroup-haapp-01
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff9a0f9
              iqn.1986-03.com.sun:boot.00144ff828d4
 
group-001 initgrp_sc1cn1_service
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff8faae
              iqn.1986-03.com.sun:boot.0010e0479e74
 
group-002 initgrp_sc1cn1dom0
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.0010e0479e74
 
group-003 initgrp_sc1cn1dom1
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff8faae
 
group-004 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff97c9b
 
group-005 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff828d4
 
group-006 initgrp_sc1cn2_service
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ffbf174
              iqn.1986-03.com.sun:boot.0010e0479e75
 
group-007 initgrp_sc1cn2dom0
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.0010e0479e75
 
group-008 initgrp_sc1cn2dom1
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ffbf174
 
group-009 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ffb3b6c
 
group-010 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
          |
          +-> INITIATORS
              iqn.1986-03.com.sun:boot.00144ff9a0f9
 
sapm7-h1-storadm:configuration san initiators iscsi groups> cd ../..
sapm7-h1-storadm:configuration san initiators> cd ..

Listing 6: Creating the quorum iSCSI initiator group.

Next, create a quorum iSCSI target, which will subsequently be added to a target group. As the interface listing at the beginning of Listing 7 shows, ipmp3 is the IPMP interface that hosts the appliance's InfiniBand storage traffic on head 1. Create a quorum iSCSI target that uses that interface (Listing 7).

sapm7-h1-storadm:configuration net interfaces> ls
Interfaces:
 
INTERFACE   STATE    CLASS LINKS       ADDRS                  LABEL
ibpart1     up       ip    ibpart1     0.0.0.0/32             p8503_ibp0
ibpart2     up       ip    ibpart2     0.0.0.0/32             p8503_ibp1
ibpart3     offline  ip    ibpart3     0.0.0.0/32             p8503_ibp0
ibpart4     offline  ip    ibpart4     0.0.0.0/32             p8503_ibp1
ibpart5     up       ip    ibpart5     0.0.0.0/32             p8503_ibp0
ibpart6     up       ip    ibpart6     0.0.0.0/32             p8503_ibp1
ibpart7     offline  ip    ibpart7     0.0.0.0/32             p8503_ibp0
ibpart8     offline  ip    ibpart8     0.0.0.0/32             p8503_ibp1
igb0        up       ip    igb0        10.129.112.136/20      igb0
igb2        up       ip    igb2        10.129.97.146/20       igb2
ipmp1       up       ipmp  ibpart1     192.168.24.9/22        ipmp_versaboot1
                           ibpart2                            
ipmp2       offline  ipmp  ibpart3     192.168.24.10/22       ipmp_versaboot2
                           ibpart4                            
ipmp3       up       ipmp  ibpart5     192.168.28.1/22        ipmp_stor1
                           ibpart6                            
ipmp4       offline  ipmp  ibpart7     192.168.28.2/22        ipmp_stor2
                           ibpart8                            
vnic1       up       ip    vnic1       10.129.112.144/20      vnic1
vnic2       offline  ip    vnic2       10.129.112.145/20      vnic2
 
sapm7-h1-storadm:configuration san> targets iscsi
sapm7-h1-storadm:configuration san targets iscsi> create
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set alias=QuorumTarget-haapp-01
                         alias = QuorumTarget-haapp-01 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set interfaces=ipmp3
                    interfaces = ipmp3 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> commit
sapm7-h1-storadm:configuration san targets iscsi> show
Targets:
 
TARGET     ALIAS          
target-000 QuorumTarget-haapp-01
           |
           +-> IQN
               iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
 
target-001 targ_sc1sn1_iodinstall
           |
           +-> IQN
               iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
 
target-002 targ_sc1sn1_ipmp1
           |
           +-> IQN
               iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
 
target-003 targ_sc1sn1_ipmp2
           |
           +-> IQN
               iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3

Listing 7: Creating the quorum iSCSI target.

Using the target just created, define a quorum iSCSI target group (QuorumGroup-haapp-01), as shown in Listing 8.

sapm7-h1-storadm:configuration san targets iscsi> groups
sapm7-h1-storadm:configuration san targets iscsi groups> create
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set name=\
QuorumGroup-haapp-01
                          name = QuorumGroup-haapp-01 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set targets=\
iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
                       targets = iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> commit 
sapm7-h1-storadm:configuration san targets iscsi groups> show
Groups:
 
GROUP     NAME
group-000 QuorumGroup-haapp-01
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
group-001 targgrp_sc1sn1_iodinstall
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
 
group-002 targgrp_sc1sn1_ipmp1
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
 
group-003 targgrp_sc1sn1_ipmp2
          |
          +-> TARGETS
              iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3

Listing 8: Creating a group for the quorum iSCSI target.

Create a quorum project and an iSCSI LUN for the quorum device, as shown in Listing 9.

sapm7-h1-storadm:configuration san targets iscsi groups> cd /
sapm7-h1-storadm:> shares
sapm7-h1-storadm:shares> ls
Properties:
                          pool = supercluster1
 
Projects:
                     IPS-repos
                      OSC-data
                     OSC-oeshm
                          OVMT
                       default
                    sc1-ldomfs
Children:
                       encryption => Manage encryption keys
                      replication => Manage remote replication
                           schema => Define custom property schema
 
sapm7-h1-storadm:shares> project QuorumProject
sapm7-h1-storadm:shares QuorumProject (uncommitted)> commit
sapm7-h1-storadm:shares> select QuorumProject
sapm7-h1-storadm:shares QuorumProject> lun QuorumLUN-haapp-01
sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set volsize=1G
                       volsize = 1G (uncommitted)
sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set targetgroup=\
QuorumGroup-haapp-01
                   targetgroup = QuorumGroup-haapp-01 (uncommitted)
sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set initiatorgroup=\
QuorumGroup-haapp-01
                initiatorgroup = QuorumGroup-haapp-01 (uncommitted)
sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> set lunumber=0
                      lunumber = 0 (uncommitted)
sapm7-h1-storadm:shares QuorumProject/QuorumLUN-haapp-01 (uncommitted)> commit
sapm7-h1-storadm:shares QuorumProject> ls
Properties:
                    aclinherit = restricted
                       aclmode = discard
                         atime = true
                      checksum = fletcher4
                   compression = off
                         dedup = false
                 compressratio = 100
                        copies = 1
                      creation = Fri Jan 22 2016 00:15:15 GMT+0000 (UTC)
                       logbias = latency
                    mountpoint = /export
                         quota = 0
                      readonly = false
                    recordsize = 128K
                   reservation = 0
                      rstchown = true
                secondarycache = all
                        nbmand = false
                      sharesmb = off
                      sharenfs = on
                       snapdir = hidden
                         vscan = false
              defaultuserquota = 0
             defaultgroupquota = 0
                    encryption = off
                     snaplabel = 
                      sharedav = off
                      shareftp = off
                     sharesftp = off
                     sharetftp = off
                          pool = supercluster1
                canonical_name = supercluster1/local/QuorumProject
                 default_group = other
           default_permissions = 700
                default_sparse = false
                  default_user = nobody
          default_volblocksize = 8K
               default_volsize = 0
                      exported = true
                     nodestroy = false
                  maxblocksize = 1M
                    space_data = 31K
              space_unused_res = 0
       space_unused_res_shares = 0
               space_snapshots = 0
               space_available = 7.10T
                   space_total = 31K
                        origin = 
 
Shares:
 
 
LUNs:
 
NAME                VOLSIZE ENCRYPTED     GUID
QuorumLUN-haapp-01  1G     off           600144F09EF4EF20000056A1756A0015
 
Children:
                           groups => View per-group usage and manage group
                                     quotas
                      replication => Manage remote replication
                        snapshots => Manage snapshots
                            users => View per-user usage and manage user quotas

Listing 9: Creating a quorum project and an iSCSI LUN for the quorum device.

Configure a static iSCSI target and verify that the quorum LUN is visible on each cluster node. Listing 10 and Listing 11 show the commands executed in domains sapm7adm-haapp-0101 and sapm7adm-haapp-0201, respectively.

root@sapm7adm-haapp-0101:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-\
5ec2-6331-bbca-fa190035423f,192.168.28.1
root@sapm7adm-haapp-0101:~# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f,192.168.28.1:3260
root@sapm7adm-haapp-0101:~# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
        Alias: QuorumTarget-haapp-01
        TPGT: 2
        ISID: 4000002a0000
        Connections: 1
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
 
Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
        Alias: targ_sc1sn1_ipmp1
        TPGT: 2
        ISID: 4000002a0001
        Connections: 1
        LUN: 1
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2
 
Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
        Alias: targ_sc1sn1_ipmp1
        TPGT: 2
        ISID: 4000002a0000
        Connections: 1
        LUN: 1
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2

Listing 10: Configuring one cluster node's iSCSI target to see the quorum LUN.

root@sapm7adm-haapp-0201:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-\
5ec2-6331-bbca-fa190035423f,192.168.28.1
root@sapm7adm-haapp-0201:~# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f,192.168.28.1:3260
root@sapm7adm-haapp-0201:~# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
        Alias: QuorumTarget-haapp-01
        TPGT: 2
        ISID: 4000002a0000
        Connections: 1
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
 
Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
        Alias: targ_sc1sn1_ipmp2
        TPGT: 2
        ISID: 4000002a0001
        Connections: 1
        LUN: 2
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2
 
Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
        Alias: targ_sc1sn1_ipmp2
        TPGT: 2
        ISID: 4000002a0000
        Connections: 1
        LUN: 2
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
        LUN: 0
             Vendor:  SUN     
             Product: Sun Storage 7000
             OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2

Listing 11: Configuring the other cluster node's iSCSI target to see the quorum LUN.

The newly created iSCSI LUN can now be accessed from both nodes. During cluster creation, Oracle Solaris Cluster automatically recognizes it and offers it as the default quorum device.
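
After the cluster has been created (as described in the following sections), the quorum configuration can be verified from either node. A brief sketch using standard Oracle Solaris Cluster commands; the exact output depends on the deployment:

# Display the configured quorum devices and their properties.
/usr/cluster/bin/clquorum show
# Display vote counts and whether the quorum device is online.
/usr/cluster/bin/clquorum status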

Installing and Configuring the Oracle Solaris Cluster 4.3 Software

Step 1. Install the solaris-small-server package group on both nodes.

Refer to the Oracle Solaris Cluster 4.3 Software Installation Guide for detailed information about installing the Oracle Solaris Cluster 4.3 software. The Oracle Solaris Cluster software requires at least the Oracle Solaris solaris-small-server package group. Because the I/O domains in this example were originally installed with the solaris-minimal-server package group, they require the installation of the solaris-small-server package group on both nodes.

Listing 12 shows that the solaris-small-server package group is not installed on node 1, sapm7adm-haapp-0101.

root@sapm7adm-haapp-0101:~# pkg info -r solaris-small-server
          Name: group/system/solaris-small-server
       Summary: Oracle Solaris Small Server
   Description: Provides a useful command-line Oracle Solaris environment
      Category: Meta Packages/Group Packages
         State: Not installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.3.1.0.5.0
Packaging Date: Tue Oct 06 13:56:21 2015
          Size: 5.46 kB
          FMRI: pkg://solaris/group/system/[email protected],5.11-0.175.3.1.0.5.0:20151006T135621Z

Listing 12: The solaris-small-server package group is not installed on node 1.

Repeat this step on the second node, sapm7adm-haapp-0201, to check whether the solaris-small-server package group is already installed there.

Perform the steps in Listing 13 to install the Oracle Solaris solaris-small-server package group on node 1, sapm7adm-haapp-0101.

root@sapm7adm-haapp-0101:~# pkg install --accept --be-name solaris-small solaris-small-server
           Packages to install:  92
       Create boot environment: Yes
Create backup boot environment:  No
 
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              92/92   13209/13209  494.7/494.7    0B/s
 
PHASE                                          ITEMS
Installing new actions                   19090/19090
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           2/2 
 
A clone of install exists and has been updated and activated.
On the next boot the Boot Environment solaris-small will be
mounted on '/'.  Reboot when ready to switch to this updated BE.
 
Updating package cache                           2/2 
root@sapm7adm-haapp-0101:~# beadm list
BE            Flags Mountpoint Space  Policy Created          
--            ----- ---------- -----  ------ -------          
install       N     /          484.0K static 2016-01-19 16:53 
solaris-small R     -          4.72G  static 2016-01-21 16:35 

Listing 13: Installing the solaris-small-server package group on node 1.

Repeat the same steps to install the solaris-small-server package group on the second node, sapm7adm-haapp-0201.

Step 2. Reboot both nodes.

After installing the package group on both nodes, reboot the first node, sapm7adm-haapp-0101, to mount the solaris-small boot environment as /.

root@sapm7adm-haapp-0101:~# reboot
root@sapm7adm-haapp-0101:~# beadm list
BE            Flags Mountpoint Space  Policy Created          
--            ----- ---------- -----  ------ -------          
install       -     -          88.06M static 2016-01-19 16:53 
solaris-small NR    /          4.85G  static 2016-01-21 16:35 

Listing 14: Rebooting Oracle Solaris on node 1.

Repeat the same step to reboot the second node, sapm7adm-haapp-0201.

Step 3. Install the Oracle Solaris Cluster software.

It is recommended that you install the ha-cluster-full package group because it contains packages for all data services implemented by Oracle Solaris Cluster. If any other package group is installed (such as the ha-cluster-minimal package group), SAP-specific Oracle Solaris Cluster packages must be added manually before you can properly configure clustered resources. Check the Oracle Solaris Cluster 4.3 Software Installation Guide for complete information on package recommendations.
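
To see which group packages the cluster repository offers before choosing one, the ha-cluster publisher can be queried once it has been configured (as shown in Listing 15). A quick sketch:

# List all Oracle Solaris Cluster group packages available from the publisher.
pkg list -a 'ha-cluster/group-package/*'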

Locate the repository for the Oracle Solaris Cluster software, configure the repository location for the ha-cluster publisher, and install the ha-cluster-full package group (Listing 15).

root@sapm7adm-haapp-0101:~# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F file:///net/192.168.28.1/export/IPS-repos/solaris11/repo/
exa-family                  origin   online F file:///net/192.168.28.1/export/IPS-repos/exafamily/repo/
 
root@sapm7adm-haapp-0101:~# ls /net/192.168.28.1/export/IPS-repos/osc4/repo
pkg5.repository  publisher
 
root@sapm7adm-haapp-0101:~# pkg set-publisher -g file:///net/192.168.28.1/export/IPS-repos\
/osc4/repo ha-cluster
 
root@sapm7adm-haapp-0101:~# pkg info -r ha-cluster-full
          Name: ha-cluster/group-package/ha-cluster-full
       Summary: Oracle Solaris Cluster full installation group package
   Description: Oracle Solaris Cluster full installation group package
      Category: Meta Packages/Group Packages
         State: Not installed
     Publisher: ha-cluster
       Version: 4.3 (Oracle Solaris Cluster 4.3.0.24.0)
 Build Release: 5.11
        Branch: 0.24.0
Packaging Date: Wed Aug 26 23:33:36 2015
          Size: 5.88 kB
          FMRI: pkg://ha-cluster/ha-cluster/group-package/[email protected],5.11-0.24.0:20150826T233336Z
 
root@sapm7adm-haapp-0101:~# pkg install --accept --be-name ha-cluster ha-cluster-full
           Packages to install:  96
       Create boot environment: Yes
Create backup boot environment:  No
 
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              96/96     7794/7794  324.6/324.6    0B/s
 
PHASE                                          ITEMS
Installing new actions                   11243/11243
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           3/3 
 
A clone of solaris-small exists and has been updated and activated.
On the next boot the Boot Environment ha-cluster will be
mounted on '/'.  Reboot when ready to switch to this updated BE.
 
Updating package cache                           3/3 

Listing 15: Installing Oracle Solaris Cluster on node 1.

Repeat these steps to configure the ha-cluster publisher and install the Oracle Solaris Cluster package group ha-cluster-full on node 2.

After installing Oracle Solaris Cluster on both nodes, reboot node 1 and destroy the backup boot environment ha-cluster-backup-1, as shown in Listing 16.

root@sapm7adm-haapp-0101:~# reboot
root@sapm7adm-haapp-0101:~# beadm list
BE                  Flags Mountpoint Space   Policy Created          
--                  ----- ---------- -----   ------ -------          
ha-cluster          NR    /          6.60G   static 2016-01-21 16:47 
ha-cluster-backup-1 -     -          123.45M static 2016-01-21 16:51 
install             -     -          88.06M  static 2016-01-19 16:53 
solaris-small       -     -          14.02M  static 2016-01-21 16:35 
root@sapm7adm-haapp-0101:~# beadm destroy -F ha-cluster-backup-1

Listing 16: Rebooting after the installation of Oracle Solaris Cluster on node 1.

Repeat the same steps to reboot node 2, sapm7adm-haapp-0201, and destroy its backup boot environment.

Step 4. Prepare for cluster creation.

The steps in Listing 17 set up necessary prerequisites on node 1 (sapm7adm-haapp-0101) before creating the cluster.

root@sapm7adm-haapp-0101:~# svccfg -s svc:/network/rpc/bind setprop config/local_only = boolean: false
root@sapm7adm-haapp-0101:~# svccfg -s svc:/network/rpc/bind listprop config/local_only
config/local_only boolean     false
root@sapm7adm-haapp-0101:~# netadm list -p ncp defaultfixed
TYPE        PROFILE        STATE
ncp         DefaultFixed   online

Listing 17: Prerequisites for cluster creation on node 1.

Repeat the steps in Listing 17 to configure prerequisites on node 2 (sapm7adm-haapp-0201).
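
If netadm list reports that the DefaultFixed profile is not active on a node, enable it before continuing. A minimal sketch; note that switching network configuration profiles reconfigures the node's network interfaces, so confirm this is appropriate for the domain:

# Activate the fixed network configuration profile required by Oracle Solaris Cluster.
netadm enable -p ncp DefaultFixed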

Step 5. Configure network access policies for the cluster.

During the initial configuration of a new cluster, cluster configuration commands are issued by one system, called the control node. The control node issues the command to establish the new cluster and configures other specified systems as cluster nodes. The clauth command controls network access policies for machines configured as nodes of the new cluster. Before running clauth, add the directory /usr/cluster/bin to the default path for executables in the .profile file on node 1:

export PATH=/usr/bin:/usr/sbin
PATH=$PATH:/usr/cluster/bin
 
".profile" 27 lines, 596 characters written

Listing 18: Adding the path to the cluster software executables.

Configure the access policies on both cluster nodes. TCP wrappers for remote procedure call (RPC) must be disabled on all nodes of the cluster. The clauth command authorizes acceptance of commands from the control node, which is node 1 (sapm7adm-haapp-0101) in this deployment.

root@sapm7adm-haapp-0101:~# svccfg -s rpc/bind listprop config/enable_tcpwrappers
config/enable_tcpwrappers boolean     false
 
root@sapm7adm-haapp-0201:~# svccfg -s rpc/bind listprop config/enable_tcpwrappers
config/enable_tcpwrappers boolean     false
 
root@sapm7adm-haapp-0201:~# PATH=$PATH:/usr/cluster/bin
root@sapm7adm-haapp-0201:~# clauth enable -n sapm7adm-haapp-0101
 
root@sapm7adm-haapp-0201:~# svcs svc:/network/rpc/scrinstd:default
STATE          STIME    FMRI
disabled       16:51:36 svc:/network/rpc/scrinstd:default
root@sapm7adm-haapp-0201:~# svcadm enable svc:/network/rpc/scrinstd:default
root@sapm7adm-haapp-0201:~# svcs svc:/network/rpc/scrinstd:default
STATE          STIME    FMRI
online         17:12:11 svc:/network/rpc/scrinstd:default
 
root@sapm7adm-haapp-0101:~# svcs svc:/network/rpc/scrinstd:default
STATE          STIME    FMRI
online         17:10:06 svc:/network/rpc/scrinstd:default

Listing 19: Configuring cluster access policies.

Creating a Cluster Using the Oracle Solaris Cluster Manager BUI

To finish the installation, create a cluster using Oracle Solaris Cluster Manager (Figure 2), a browser-based user interface (BUI) for the software. Connect to port 8998 on the first node (in this case, to https://sapm7adm-haapp-0101:8998/). Currently, the BUI supports configuration tasks performed only by the user root.


Figure 2. Connecting to the Oracle Solaris Cluster Manager BUI.

The cluster creation wizard guides you through the process of creating an Oracle Solaris Cluster configuration. It gathers configuration details, displays the results of checks before installing, and then performs an Oracle Solaris Cluster installation. The same BUI is used for managing and monitoring the Oracle Solaris Cluster configuration after installation. When using the BUI to manage the configuration, the corresponding CLI commands are displayed as they are run on the nodes.

The cluster creation wizard (Figure 3) first verifies prerequisites for cluster creation. Select Typical for the Creation Mode, which works well on Oracle SuperCluster for clustered SAP environments.


Figure 3. The Oracle Solaris Cluster wizard simplifies the process of cluster creation.

Select the cluster interconnects ic1 and ic2 (configured previously) as the local transport adapters (Figure 4).


Figure 4. Specify the adapter interfaces for the Oracle Solaris Cluster configuration.

Next, specify the cluster name and nodes for the cluster configuration (Figure 5) and the quorum device (Figure 6). When selecting a quorum device, Oracle Solaris Cluster can detect whether a disk is the only direct-attached shared disk; if more than one is present, it asks you to make a choice.


Figure 5. Specify the nodes for the Oracle Solaris Cluster configuration.


Figure 6. Specify the quorum configuration for Oracle Solaris Cluster.

Resource security information is displayed (Figure 7), and then the entire configuration is presented for review (Figure 8). At this point, Oracle Solaris Cluster is ready to create the cluster. If desired, select the option from the review screen to perform a cluster check before actual cluster creation.


Figure 7. Resource security information.


Figure 8. Review the Oracle Solaris Cluster configuration.

Figure 9 shows the results of the cluster check. Review the configuration and click Back to make changes if needed. Click the Create button to begin the actual cluster creation. Figure 10 shows the output of the cluster creation steps for the cluster sapm7-haapp-01; this step results in the domain being rebooted as a cluster node.


Figure 9. Cluster check report.


Figure 10. Results of the cluster creation.

Click Finish to initiate the configuration of the remaining node, which will reboot as a cluster node; at this point, it will join the other node and form the cluster.

The nodes are each rebooted to join the cluster. After the reboot, log in again to the BUI to view cluster status. Figure 11 shows status information for the created cluster sapm7-haapp-01.


Figure 11. Oracle Solaris Cluster Manager provides status information about the created cluster.

At this point, there are no resource groups or zone clusters. More detailed information is available through the menu options. For example, by selecting Nodes, you can drill down to see status information for each node (Figure 12). By selecting Quorum, you can also see status information for the quorum device and nodes (Figure 13). The same information is available from the command line, as sketched after the figures.


Figure 12. The interface can present detailed status information about cluster nodes.


Figure 13. Quorum device information is also available.
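
A short sketch of the standard Oracle Solaris Cluster commands that report the same status shown in the BUI; run them from either node, assuming /usr/cluster/bin is in the PATH as configured earlier (output varies with the deployment):

# Overall cluster status, including nodes, transport paths, and quorum.
cluster status
# Per-node status details.
clnode status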

Migrating Zones from the Source Platform

This series of articles focuses on migrating a complete SAP system to an Oracle engineered system, specifically to an Oracle SuperCluster M7. After the Oracle Solaris Cluster software is installed and the cluster has been created, zones from the source platform (in this example, a cluster of Oracle's SPARC T5-8 server nodes connected to an Oracle ZFS Storage Appliance) can be migrated to the destination platform, if desired.

The zone clusters in the source environment have the operating system (OS) configuration for the SAP system, including user accounts, home directories, SAP system resource management settings, and so forth. To minimize the risk of omitting configuration settings that are necessary for the destination SAP system to be operational, Unified Archives—a feature of the Oracle Solaris 11 operating system—can capture the source OS environment with fidelity. Because Unified Archives allow multiple system instances to be archived in a single unified file format, they provide a relatively easy method of migrating zones from an existing system to a new platform or virtual machine.
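
Once the archives have been created (as in Listing 20 below), their contents can be inspected before deployment on the destination system. A minimal sketch using the archiveadm utility; the archive path matches the one created in Listing 20:

# Summarize the system instances captured in a Unified Archive.
archiveadm info /export/software/sap/epr1-ascs-01.uar
# Verbose output shows additional details for each archived system.
archiveadm info -v /export/software/sap/epr1-ascs-01.uar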

Step 1. Create Unified Archives of zone cluster nodes on the source system.

Create Unified Archives of the zone cluster nodes on both source system nodes (sapt58-haapp-01 and sapt58-haapp-02), as shown in Listing 20, and store them in a utility directory (/export/software/sap) mounted from the source system's Oracle ZFS Storage Appliance:

root@sapt58-haapp-01:~# clzc halt pr1-haapps-zc
Waiting for zone halt commands to complete on all the nodes of the zone cluster "pr1-haapps-zc"...
root@sapt58-haapp-01:~# clzc halt pr1-ascs-zc
Waiting for zone halt commands to complete on all the nodes of the zone cluster "pr1-ascs-zc"...
 
root@sapt58-haapp-01:~# archiveadm create -z pr1-ascs-zc -e /export/software/sap/epr1-ascs-01.uar
Initializing Unified Archive creation resources...
Unified Archive initialized: /export/software/sap/epr1-ascs-01.uar
Logging to: /system/volatile/archive_log.119
Executing dataset discovery...
Dataset discovery complete
Preparing archive system image...
Beginning archive stream creation...
