1 Reply Latest reply on Feb 8, 2013 10:17 AM by Shiyer-Oracle

    HOWTO: Create 2-node Solaris Cluster 4.1/Solaris 11.1(x64) using VirtualBox

      I did this on VirtualBox 4.1 on Windows 7 and VirtualBox 4.2 on Linux x64. Basic prerequisites: 40 GB of disk space, 8 GB of RAM, and a VirtualBox installation capable of running 64-bit guests.

      Please read all the descriptive messages/prompts shown by 'scinstall' and 'clsetup' before answering.

      0) Download from OTN
      - Solaris 11.1 Live Media for x86 (~966 MB)
      - Complete Solaris 11.1 IPS Repository image (~7 GB total)
      - Oracle Solaris Cluster 4.1 IPS Repository image (~73 MB)

      1) Run the VirtualBox console and create VM1: 3 GB RAM, 30 GB HDD

      2) The new VM1 has 1 NIC; add 2 more NICs (3 in total). Any adapter type should be okay; 'VirtualBox Host-Only Adapter' worked fine for me.
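      Steps 1-2 can also be scripted from the host with VBoxManage. The sketch below only prints the commands for review rather than executing them; the VM name, OS type, disk path, and host-only adapter choice are my assumptions:

```shell
# Dry-run sketch: print the VBoxManage calls for creating one cluster VM.
# Names and paths are assumptions -- adjust, review, then run the output.
VM=solvm1
DISK=/scratch/myimages/sc41cluster/$VM.vdi
CMDS="vboxmanage createvm --name $VM --ostype Solaris11_64 --register
vboxmanage modifyvm $VM --memory 3072 --nic1 hostonly --nic2 hostonly --nic3 hostonly
vboxmanage createhd --filename $DISK --size 30720 --format VDI
vboxmanage storagectl $VM --name SATA --add sata
vboxmanage storageattach $VM --storagectl SATA --port 0 --device 0 --type hdd --medium $DISK"
echo "$CMDS"
```

      Repeat with VM=solvm2 for the second node; attaching the Live Media ISO and booting is still done from the GUI as described below.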

      3) Start VM1, point the "Select start-up disk" to the Solaris 11.1 Live Media ISO.

      4) Select "Oracle Solaris 11.1" in the GRUB menu. Select Keyboard layout and Language.
      VM1 will boot and the Solaris 11.1 Live Desktop screen will appear.

      5) Click <Install Oracle Solaris> from the desktop and supply the necessary inputs.
      The default Disk Discovery (iSCSI not needed) and Disk Selection are fine.
      Disable the "Support Registration" connection info.

      6) The alternate user created during the install has root privileges (sudo). Set an appropriate VM1 name (e.g. solvm1)

      7) When the VM is rebooted after the installation completes, make sure the Solaris 11.1 Live ISO has been ejected, or else the VM will boot from the Live CD again.

      8) Repeat steps 1-6, create VM2 and install Solaris.

      9) Secure-copy (sftp/scp) the Solaris 11.1 IPS repository ISO and the Solaris Cluster 4.1 IPS repository ISO onto both VMs, e.g. under /home/user1/

      10) We need to set up both repositories: the Solaris 11.1 IPS repository and the Solaris Cluster 4.1 IPS repository

      11) All commands from now on are to be run as root

      12) By default the 'solaris' publisher is of type online (pkg.oracle.com); it needs to be repointed to the local ISO we downloaded :-

      +$ sudo sh+
      +# lofiadm -a /home/user1/sol-11_1-repo-full.iso+
      +//output : /dev/lofi/N+
      +# mount -F hsfs /dev/lofi/N /mnt+
      +# pkg set-publisher -G '*' -M '*' -g /mnt/repo solaris+

      13) Setup the ha-cluster package :-

      +# lofiadm -a /home/user1/osc-4_1-ga-repo-full.iso+
      +//output : /dev/lofi/N+
      +# mkdir /mnt2+
      +# mount -F hsfs /dev/lofi/N /mnt2+
      +# pkg set-publisher -g file:///mnt2/repo ha-cluster+
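      Steps 12 and 13 follow the same pattern, so they can be condensed into a small helper. The sketch below only prints the commands for review before running them as root; N is a placeholder for the lofi device number that 'lofiadm -a' actually reports, and the -G '*' -M '*' options from step 12 apply only when repointing the existing 'solaris' publisher:

```shell
# Sketch: print the repository-setup commands for one ISO.
# N is a placeholder for the real /dev/lofi/N device number.
repo_cmds() {
    iso=$1; mnt=$2; pub=$3
    echo "lofiadm -a $iso"
    echo "mkdir -p $mnt"
    echo "mount -F hsfs /dev/lofi/N $mnt"
    echo "pkg set-publisher -g file://$mnt/repo $pub"
}
repo_cmds /home/user1/sol-11_1-repo-full.iso /mnt solaris
repo_cmds /home/user1/osc-4_1-ga-repo-full.iso /mnt2 ha-cluster
```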

      14) Verify both packages are fine :-

      +# pkg publisher+

      PUBLISHER                   TYPE     STATUS P LOCATION
      solaris                     origin   online F file:///mnt/repo/
      ha-cluster                  origin   online F file:///mnt2/repo/

      15) Install the complete SC4.1 package by installing 'ha-cluster-full'
      +# pkg install ha-cluster-full+

      16) Repeat steps 12-15 on VM2.

      17) Now both VMs have the OS and SC 4.1 installed.

      18) By default the 3 NICs are in the "Automatic" network profile and are configured with DHCP. We need to activate the Fixed profile and put the 3 NICs into it. Only 1 interface, the public interface, needs to be configured manually; the other 2 are for the cluster interconnect and will be configured automatically by scinstall. Execute the following commands :-

      +# netadm enable -p ncp defaultfixed+
      +# netadm list -p ncp defaultfixed+

      +//Configure the public interface+

      +//Verify none of the interfaces are listed yet, then add all 3+
      +# ipadm show-if+

      +//Run 'dladm show-phys' or 'dladm show-link' to check the interface names : they must be net0/net1/net2+

      +# ipadm create-ip net0+
      +# ipadm create-ip net1+
      +# ipadm create-ip net2+

      +# ipadm show-if+

      +//Select a proper IP and configure the public interface. I used a 172.x address+
      +# ipadm create-addr -T static -a <ipaddr>/<prefix> net0/publicip+

      +//IP plumbed; take the address down and up once+
      +# ipadm down-addr -t net0/publicip+
      +# ipadm up-addr -t net0/publicip+

      +//Verify the public IP is fine by pinging the host+
      +# ping <host-IP>+

      +//Verify: net0 should be up, net1/net2 should be down+
      +# ipadm+

      19) Repeat the network configuration in the step above on VM2

      20) Verify that both VMs can ping each other's public IP, then add entries for each other in /etc/hosts
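      The /etc/hosts additions on each VM might look like the fragment below; the 172.16.0.x addresses are hypothetical stand-ins for whatever static IPs you actually assigned to net0:

```shell
# Hypothetical host entries -- substitute the public IPs you configured.
# The same two lines go on BOTH VMs so each node can resolve the other.
HOSTS_FRAGMENT='172.16.0.101   solvm1
172.16.0.102   solvm2'
echo "$HOSTS_FRAGMENT"        # review, then: echo "$HOSTS_FRAGMENT" >> /etc/hosts
```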

      21) Now we are ready to run scinstall to create and configure the 2-node cluster :-

      +# cd /usr/cluster/bin+
      +# ./scinstall+
      select "1) Create a new cluster ..." from the Main Menu
      select "1) Create a new cluster" from the submenu
      select 2) Custom in "Typical or Custom Mode"
      Enter cluster name : mycluster1 (e.g)
      Add the 2 nodes : solvm1 & solvm2 and press <ctrl-d>
      Accept the default "No" for <Do you need to use DES authentication>
      Accept default "Yes" for <Should this cluster use at least two private networks>
      Enter "No" for <Does this two-node cluster use switches>
      Select "1)net1" for "Select the first cluster transport adapter"
      If there is a warning about unexpected traffic on "net1", ignore it
      Enter "net1" when it asks corresponding adapter on "solvm2"
      Select "2)net2" for "Select the second cluster transport adapter"
      Enter "net2" when it asks corresponding adapter on "solvm2"
      Select "Yes" for "Is it okay to accept the default network address"
      Select "Yes" for "Is it okay to accept the default network netmask"
      Now the IP addresses will be plumbed in the 2 private interfaces
      Select "yes" for "Do you want to turn off global fencing"
      (These are SATA serial disks, so no fencing)
      Enter "Yes" for "Do you want to disable automatic quorum device selection"
      (we will add quorum disks later)
      Enter "Yes" for "Proceed with cluster creation"
      Select "No" for "Interrupt cluster creation for cluster check errors"
      The second node will be configured and rebooted, then the first node will be configured and rebooted
      After both nodes have rebooted, verify the cluster has been created and both nodes joined.
      On both nodes :-
      +# cd /usr/cluster/bin+
      +# ./clnode status+
      +//should show both nodes Online.+

      At this point there are no quorum disks, so one of the nodes is designated the holder of the single quorum vote. That node's VM has to be up for the other node to come up and the cluster to be formed.
      To check the current quorum status, run :-

      +# ./clquorum show+
      +//one of the nodes will have 1 vote and the other 0 (zero)+

      Now the cluster is in 'Installation Mode' and we need to add a quorum disk.

      22) Shut down both nodes, as we will be adding shared disks to both of them.

      23) Create 2 VirtualBox HDDs (VDI files) on the host: 1 for quorum and 1 for a shared filesystem. I used a size of 1 GB for each :-

      *$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk1.vdi --size 1024 --format VDI --variant Fixed*
      *Disk image created. UUID: 899147b9-d21f-4495-ad55-f9cf1ae46cc3*

      *$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk2.vdi --size 1024 --format VDI --variant Fixed*
      *Disk image created. UUID: 899147b9-d22f-4495-ad55-f9cf15346caf*

      Attach these disks to both VMs as shareable :-

      *$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable*
      *$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable*
      *$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable*
      *$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable*

      The disks are attached to SATA ports 1 & 2 of each VM. On my VirtualBox on Linux, the controller type is "SATA", whereas on Windows it is "SATA Controller".

      The "--mtype shareable" parameter is important.

      Mark both disks as shareable :-

      *$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk1.vdi --type shareable*
      *$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk2.vdi --type shareable*
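      The four storageattach calls and two modifyhd calls above can be generated in one loop. This sketch only prints the commands so they can be checked first; it assumes the controller is named "SATA" (on Windows hosts it may be "SATA Controller"):

```shell
# Dry-run sketch: print the share-and-attach commands for both disks on
# both VMs, mirroring the manual steps above.
DIR=/scratch/myimages/sc41cluster
OUT=""
port=1
for disk in sdisk1 sdisk2; do
    OUT="$OUT
vboxmanage modifyhd $DIR/$disk.vdi --type shareable"
    for vm in solvm1 solvm2; do
        OUT="$OUT
vboxmanage storageattach $vm --storagectl SATA --port $port --device 0 --type hdd --medium $DIR/$disk.vdi --mtype shareable"
    done
    port=$((port + 1))
done
echo "$OUT"
```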

      24) Start both VMs. We need to format the 2 shared disks.

      25) From VM1, run format. In my case, the 2 new shared disks show up as 'c7t1d0' and 'c7t2d0'.

      +# format+
      select disk 1 (c7t1d0)
      [disk formatted]
      Type 'y' to accept the default partitioning

      26) Repeat step 25 for the 2nd disk (c7t2d0)

      27) Make sure the shared disks can be used for quorum :-

      On VM1
      +# ./cldevice refresh+
      +# ./cldevice show+

      On VM2
      +# ./cldevice refresh+
      +# ./cldevice show+

      The shared disks should show the same DID on both nodes (d2, d3, d4, etc.). Note down the DID that you are going to use for quorum (e.g. d2)

      By default, global fencing is enabled for these disks. We need to turn it off for all disks as these are SATA disks :-

      +# cldevice set -p default_fencing=nofencing-noscrub d1+
      +# cldevice set -p default_fencing=nofencing-noscrub d2+
      +# cldevice set -p default_fencing=nofencing-noscrub d3+
      +# cldevice set -p default_fencing=nofencing-noscrub d4+
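      Since the cldevice call is identical for every DID, a loop form is less error-prone. The sketch below just stores and prints the commands; drop the echo wrapper (and run as root from /usr/cluster/bin) once they look right:

```shell
# Sketch: generate one fencing command per DID reported by 'cldevice show'.
CMDS=$(for did in d1 d2 d3 d4; do
    echo "cldevice set -p default_fencing=nofencing-noscrub $did"
done)
echo "$CMDS"
```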

      28) It is better to do one more reboot of both VMs; without it, I got an error when adding the quorum disk

      29) Run clsetup to add quorum disk and to complete cluster configuration :-

      +# ./clsetup+
      === Initial Cluster Setup ===
      Enter 'Yes' for "Do you want to continue"
      Enter 'Yes' for "Do you want to add any quorum devices"
      Select '1) Directly Attached Shared Disk' for the type of device
      Enter 'Yes' for "Is it okay to continue"
      Enter 'd2' (or 'd3') for 'Which global device do you want to use'
      Enter 'Yes' for "Is it okay to proceed with the update"
      The command 'clquorum add d2' is run
      Enter 'No' for "Do you want to add another quorum device"
      Enter 'Yes' for "Is it okay to reset "installmode"?"
      Cluster initialization is complete!

      30) Run 'clquorum status' to confirm both nodes and the quorum disk have 1 vote each
      31) Run other cluster commands to explore!

      I will cover data services and the shared filesystem in another post. Basically, the other shared disk
      can be used to create a UFS filesystem and mount it on all nodes.