10 Replies Latest reply: Apr 18, 2012 4:26 AM by 912637

    clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)

    912637
      Hi All,

      I have been trying to set up a zone cluster (OS: Solaris 11 | Cluster: Solaris Cluster 4.0). Solaris Cluster installed fine, but after I configured a zone cluster, I was not able to see the clprivnet0 interface up in the zone cluster node. Can anybody please let me know what I am doing wrong?

      --- Details ---
      Here gzone stands for a host where Solaris Cluster is installed (global zone), and
      lzone is a hostname participating in the zone cluster.

      Inside the zone cluster node:
      ----------------------------------------
      root@zonecluster:/etc# ifconfig -a
      lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
           inet 127.0.0.1 netmask ff000000
      sc_ipmp0:1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2
           inet 192.168.2.71 netmask ffffff00 broadcast 192.168.2.255
      lo0:1: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
           inet6 ::1/128

      From the GlobalZone:
      -----------------------------
      The output below, under " --- Solaris Resources for zonecluster --- ", should have shown clprivnet0.
      root@gzone1:~# clzonecluster show

      === Zone Clusters ===

      Zone Cluster Name: zonecluster
      zonename: zonecluster
      zonepath: /zonepool/zonefs/zonecluster
      autoboot: TRUE
      ip-type: shared
      enable_priv_net: TRUE

      --- Solaris Resources for zonecluster ---

      --- Zone Cluster Nodes for zonecluster ---

      Node Name: gzone1
      physical-host: gzone1
      hostname: lzone1

      Node Name: gzone2
      physical-host: gzone2
      hostname: lzone2

      root@gzone1:~#
      --

      Configuration of the zone cluster:
      root@gzone1:~# clzonecluster export zonecluster
      create -b
      set zonepath=/zonepool/zonefs/zonecluster
      set brand=solaris
      set autoboot=true
      set enable_priv_net=true
      set ip-type=shared
      add attr
      set name=cluster
      set type=boolean
      set value=true
      end
      add node
      set physical-host=gzone1
      set hostname=lzone1
      add net
      set address=lzone1
      set physical=net0
      end
      end
      add node
      set physical-host=gzone2
      set hostname=lzone2
      add net
      set address=lzone2
      set physical=net0
      end
      end
      root@gzone1:~#
      =======================================

      Thank you.

      Edited by: 909634 on Apr 16, 2012 8:56 AM
        • 1. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
          807928
          I'm guessing that you might not have chosen the appropriate private networking options when you installed your cluster. Try:

          bash-3.00# cluster show-netprops

          === Private Network ===

          private_netaddr: 172.16.0.0
          private_netmask: 255.255.240.0
          max_nodes: 64
          max_privatenets: 10
          num_zoneclusters: 12

          I seem to recall that you need to set the number of zone clusters and/or private nets as this affects the subnet calculations. I may be wrong, but it's worth a shot. You will need to reboot the cluster if you make the changes via 'cluster set-netprops'.
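
          For example, something along these lines might do it (a sketch only; the property names are the ones shown by 'cluster show-netprops', and changing the private network settings typically has to be done with the nodes booted in noncluster mode):

          bash-3.00# cluster set-netprops -p max_privatenets=10 -p num_zoneclusters=12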

          Also see:
          https://blogs.oracle.com/SC/entry/customizing_the_private_ip_address

          Tim
          ---
          • 2. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
            912637
            Hi Tim,

            Thanks for a quick response.

            My cluster configuration can have 10 private networks. The output below is the same as what you posted.

            root@gzone1:~# cluster show-netprops

            === Private Network ===

            private_netaddr: 172.16.0.0
            private_netmask: 255.255.240.0
            max_nodes: 64
            max_privatenets: 10
            num_zoneclusters: 12

            I have also gone through the link, but from the above output it looks like private_netmask=255.255.240.0 should be OK (please correct me if I am wrong).

            My cluster was configured as follows (I re-installed Solaris Cluster again):
            scinstall -i \
            -C globalcluster \
            -F \
            -G lofi \
            -T node=gzone1,node=gzone2,authtype=sys \
            -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
            -A trtype=dlpi,name=net1 -A trtype=dlpi,name=net2 \
            -B type=switch,name=switch1 -B type=switch,name=switch2 \
            -m endpoint=:net1,endpoint=switch1 \
            -m endpoint=:net2,endpoint=switch2
            Then I added the second node to the cluster.

            After I configured the zone cluster as posted in the first post, I noticed a warning (which I had overlooked earlier).

            ***Warning
            root@gzone1:~# zoneadm -z zonecluster boot
            zone 'zonecluster': WARNING: no matching subnet found in netmasks(4); using default.

            Now, does this have anything to do with clprivnet0 not coming up, or is it unrelated?
            I saw mention of this warning in the documentation: http://docs.oracle.com/cd/E19963-01/html/821-1460/gdepn.html and in a blog post found via Google: http://houdini68.blogspot.in/2009/02/warning-e1000g02-no-matching-subnet.html

            I tried the same as suggested there, by adding 176.16.0.0 255.255.240.0 to /etc/inet/netmasks in the global zone, but I still get the warning message.
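
            For reference, a netmasks(4) entry matching the private network shown by 'cluster show-netprops' above would look like this single line (just noting the expected format):

            172.16.0.0      255.255.240.0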

            ====== Details from global zone for reference ======
            root@gzone1:/etc/inet# ifconfig -a
            lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
            lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            zone zonecluster
            inet 127.0.0.1 netmask ff000000
            sc_ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2
            inet 192.168.2.70 netmask ffffff00 broadcast 192.168.2.255
            groupname sc_ipmp0
            sc_ipmp0:1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2
            zone zonecluster
            inet 192.168.2.71 netmask ffffff00 broadcast 192.168.2.255
            net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
            inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
            groupname sc_ipmp0
            ether 0:c:29:55:5b:31
            net1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 0:c:29:55:5b:3b
            net2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 0:c:29:55:5b:45
            clprivnet0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
            inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
            ether 0:0:0:0:0:1
            lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
            inet6 ::1/128
            lo0:1: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
            zone zonecluster
            inet6 ::1/128
            sc_ipmp0: flags=28002000840<RUNNING,MULTICAST,IPv6,IPMP> mtu 1500 index 2
            inet6 ::/0
            groupname sc_ipmp0
            net0: flags=20002000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 3
            inet6 fe80::20c:29ff:fe55:5b31/10
            groupname sc_ipmp0
            ether 0:c:29:55:5b:31
            root@gzone1:/etc/inet#

            root@gzone1:/etc# grep -i netmasks nsswitch.conf
            netmasks: cluster files
            root@gzone1:/etc#
            ===

            Still no luck :(
            Thank you.
            • 3. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
              912637
              Hi Tim,

              The warning below is fixed after I added the prefix length to the addresses in the zone configuration (set address=192.168.2.71/24 and 192.168.2.81/24), roughly as sketched below. Now I do not get the warning when I boot the zone.
              ***Warning (ISSUE FIXED)
              root@gzone1:~# zoneadm -z zonecluster boot
              zone 'zonecluster': WARNING: no matching subnet found in netmasks(4); using default.
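
              For reference, the change was along these lines (a rough sketch from memory; check clzonecluster(1CL) for the exact select syntax):

              root@gzone1:~# clzonecluster configure zonecluster
              clzc:zonecluster> select node physical-host=gzone1
              clzc:zonecluster:node> select net address=lzone1
              clzc:zonecluster:node:net> set address=192.168.2.71/24
              clzc:zonecluster:node:net> end
              clzc:zonecluster:node> end
              clzc:zonecluster> commit
              clzc:zonecluster> exit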

              But clprivnet0 is still not showing up in the zone cluster (ISSUE OPEN). I hope you can give me some direction to fix this.

              Thank you.
              • 4. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                807928
                It looks like you are doing all the right things. Is your zone cluster booting fully? Are all the services running? I have had cases where a repository wasn't reachable, which caused the zone to only partially boot. So, for example, can you run any cluster commands in the zone cluster (see the quick check below)? If not, then that will be a likely cause for clprivnet not being present.
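
                For example, something like this from inside a zone cluster node (hypothetical prompts; the cluster commands live under /usr/cluster/bin if the cluster packages made it into the zone):

                root@lzone1:~# svcs -x
                root@lzone1:~# /usr/cluster/bin/clnode status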

                Tim
                ---
                • 5. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                  912637
                  Hi Tim,

                  I am able to run zoneadm commands; the output below shows the zone is running. Not sure what's wrong! clprivnet0 is supposed to come up by itself when the zone cluster is created, right?

                  root@gzone1:~# zoneadm list -cv
                  ID NAME STATUS PATH BRAND IP
                  0 global running / solaris shared
                  1 zonecluster running /zonepool/zonefs/zonecluster solaris shared
                  root@gzone1:~# zlogin zonecluster
                  [Connected to zone 'zonecluster' pts/2]
                  Oracle Corporation SunOS 5.11 11.0 November 2011
                  root@zonecluster:~# zoneadm list -cv
                  ID NAME STATUS PATH BRAND IP
                  1 zonecluster running / solaris shared
                  root@zonecluster:~#

                  root@gzone1:~# zlogin zonecluster ipadm show-addr
                  ADDROBJ TYPE STATE ADDR
                  lo0/? from-gz ok 127.0.0.1/8
                  sc_ipmp0/? from-gz ok 192.168.2.71/24
                  lo0/? from-gz ok ::1/128
                  root@gzone1:~#

                  root@gzone1:~# cluster show

                  === Cluster ===

                  Cluster Name: globalcluster
                  clusterid: 0x4F8CDD5C
                  installmode: disabled
                  heartbeat_timeout: 10000
                  heartbeat_quantum: 1000
                  private_netaddr: 172.16.0.0
                  private_netmask: 255.255.240.0
                  max_nodes: 64
                  max_privatenets: 10
                  num_zoneclusters: 12
                  udp_session_timeout: 480
                  concentrate_load: False
                  global_fencing: prefer3
                  Node List: gzone1, gzone2

                  === Host Access Control ===

                  Cluster name: globalcluster
                  Allowed hosts: None
                  Authentication Protocol: sys

                  === Cluster Nodes ===

                  Node Name: gzone1
                  Node ID: 1
                  Enabled: yes
                  privatehostname: clusternode1-priv
                  reboot_on_path_failure: disabled
                  globalzoneshares: 1
                  defaultpsetmin: 1
                  quorum_vote: 1
                  quorum_defaultvote: 1
                  quorum_resv_key: 0x4F8CDD5C00000001
                  Transport Adapter List: net1, net2

                  Node Name: gzone2
                  Node ID: 2
                  Enabled: yes
                  privatehostname: clusternode2-priv
                  reboot_on_path_failure: disabled
                  globalzoneshares: 1
                  defaultpsetmin: 1
                  quorum_vote: 1
                  quorum_defaultvote: 1
                  quorum_resv_key: 0x4F8CDD5C00000002
                  Transport Adapter List: net1, net2

                  === Transport Cables ===

                  Transport Cable: gzone1:net1,switch1@1
                  Endpoint1: gzone1:net1
                  Endpoint2: switch1@1
                  State: Enabled

                  Transport Cable: gzone1:net2,switch2@1
                  Endpoint1: gzone1:net2
                  Endpoint2: switch2@1
                  State: Enabled

                  Transport Cable: gzone2:net1,switch1@2
                  Endpoint1: gzone2:net1
                  Endpoint2: switch1@2
                  State: Enabled

                  Transport Cable: gzone2:net2,switch2@2
                  Endpoint1: gzone2:net2
                  Endpoint2: switch2@2
                  State: Enabled

                  === Transport Switches ===

                  Transport Switch: switch1
                  State: Enabled
                  Type: switch
                  Port Names: 1 2
                  Port State(1): Enabled
                  Port State(2): Enabled

                  Transport Switch: switch2
                  State: Enabled
                  Type: switch
                  Port Names: 1 2
                  Port State(1): Enabled
                  Port State(2): Enabled

                  === Quorum Devices ===

                  Quorum Device Name: quorum_vmutil_priv
                  Enabled: yes
                  Votes: 1
                  Global Name: quorum_vmutil_priv
                  Type: quorum_server
                  Hosts (enabled): gzone1, gzone2
                  Quorum Server Host: 192.168.2.51
                  Port: 9000

                  === Device Groups ===

                  === Registered Resource Types ===

                  Resource Type: SUNW.LogicalHostname:4
                  RT_description: Logical Hostname Resource Type
                  RT_version: 4
                  API_version: 2
                  RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
                  Single_instance: False
                  Proxy: False
                  Init_nodes: All potential masters
                  Installed_nodes: <All>
                  Failover: True
                  Pkglist: <NULL>
                  RT_system: True
                  Global_zone: True

                  Resource Type: SUNW.SharedAddress:2
                  RT_description: HA Shared Address Resource Type
                  RT_version: 2
                  API_version: 2
                  RT_basedir: /usr/cluster/lib/rgm/rt/hascip
                  Single_instance: False
                  Proxy: False
                  Init_nodes: <Unknown>
                  Installed_nodes: <All>
                  Failover: True
                  Pkglist: <NULL>
                  RT_system: True
                  Global_zone: True

                  === Resource Groups and Resources ===

                  === DID Device Instances ===

                  DID Device Name: /dev/did/rdsk/d1
                  Full Device Path: gzone2:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B70001d0
                  Full Device Path: gzone1:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B70001d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d2
                  Full Device Path: gzone2:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B80002d0
                  Full Device Path: gzone1:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B80002d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d3
                  Full Device Path: gzone2:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B80003d0
                  Full Device Path: gzone1:/dev/rdsk/c0t600144F0AE0E810000004F7FD7B80003d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d4
                  Full Device Path: gzone1:/dev/rdsk/c3t0d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d5
                  Full Device Path: gzone1:/dev/rdsk/c4t0d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d6
                  Full Device Path: gzone2:/dev/rdsk/c3t0d0
                  Replication: none
                  default_fencing: global

                  DID Device Name: /dev/did/rdsk/d7
                  Full Device Path: gzone2:/dev/rdsk/c4t0d0
                  Replication: none
                  default_fencing: global

                  === NAS Devices ===

                  === Zone Clusters ===

                  Zone Cluster Name: zonecluster
                  zonename: zonecluster
                  zonepath: /zonepool/zonefs/zonecluster
                  autoboot: TRUE
                  ip-type: shared
                  enable_priv_net: TRUE

                  --- Solaris Resources for zonecluster ---

                  --- Zone Cluster Nodes for zonecluster ---

                  Node Name: gzone1
                  physical-host: gzone1
                  hostname: lzone1

                  Node Name: gzone2
                  physical-host: gzone2
                  hostname: lzone2

                  root@gzone1:~#

                  I am also able to execute zonestat commands inside the zone cluster node.

                  Thank you.
                  • 6. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                    807928
                    OK, first, did you create the zone cluster using the clzc (clzonecluster) command rather than zonecfg? If not, you need to start again, as clzc is the only supported method. Second, if you 'zlogin -C zonecluster' straight after you have issued 'clzc boot zonecluster', are there any obvious error messages? Then, when inside the zone cluster, does 'svcs -x' show that the zone cluster has booted properly? Finally, if it has, then clprivnet might be clprivnet3 (i.e. not 0); a quick grep, like the one below, will show whichever instance is present.
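
                    For instance, a quick check from the global zone (zone name as used above) will show whichever clprivnet instance is present, if any:

                    root@gzone1:~# zlogin zonecluster ifconfig -a | grep clprivnet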

                    Regards,

                    Tim
                    ---
                    • 7. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                      912637
                      Hi Tim,

                      As per your advice, I have done the following...

                      root@gzone1:~# clzc configure -f zone.cfg zonecluster
                      root@gzone1:~# clzc verify zonecluster
                      Waiting for zone verify commands to complete on all the nodes of the zone cluster "zonecluster"...
                      root@gzone1:~# clzc status zonecluster

                      === Zone Clusters ===

                      --- Zone Cluster Status ---

                      Name          Node Name   Zone HostName   Status    Zone Status
                      ----          ---------   -------------   ------    -----------
                      zonecluster   gzone1      lzone1          Offline   Configured
                                    gzone2      lzone2          Offline   Configured

                      root@gzone1:~# svcs -x [ ALL OK ]
                      root@gzone1:~#

                      After this, for installing the zone:
                      I had used zoneadm all this while to install the zone on each node individually.
                      root@gzone1:~# zoneadm -z zonecluster install
                      A ZFS file system has been created for this zone.
                      Progress being logged to /var/log/zones/zoneadm.20120417T125518Z.zonecluster.install
                      Image: Preparing at /zonepool/zonefs/zonecluster/root.

                      Install Log: /system/volatile/install.6443/install_log
                      AI Manifest: /tmp/manifest.xml.zHaiKm
                      SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
                      Zonename: zonecluster
                      Installation: Starting ...

                      Creating IPS image
                      Installing packages from:
                      solaris
                      origin: http://pkg.oracle.com/solaris/release/
                      DOWNLOAD PKGS FILES XFER (MB)
                      Completed 167/167 32062/32062 175.8/175.8

                      PHASE ACTIONS
                      Install Phase 44313/44313

                      PHASE ITEMS
                      Package State Update Phase 167/167
                      Image State Update Phase 2/2
                      Installation: Succeeded

                      Note: Man pages can be obtained by installing pkg:/system/manual

                      done.

                      Done: Installation completed in 157.614 seconds.


                      Next Steps: Boot the zone, then log into the zone console (zlogin -C)

                      to complete the configuration process.

                      Log saved in non-global zone as /zonepool/zonefs/zonecluster/root/var/log/zones/zoneadm.20120417T125518Z.zonecluster.install
                      root@gzone1:~#

                      But when I used clzc, it errored as below:
                      root@gzone1:~# clzc install zonecluster
                      Waiting for zone install commands to complete on all the nodes of the zone cluster "zonecluster"...
                      clzc: (C801046) Command execution failed on node gzone1. Please refer to the console for more information
                      clzc: (C801046) Command execution failed on node gzone2. Please refer to the console for more information

                      But I couldn't get any information about the above error (C801046).

                      Then I continued with zoneadm to install the zonecluster on both gzone1&2.

                      root@gzone1:~# clzc boot zonecluster
                      Waiting for zone boot commands to complete on all the nodes of the zone cluster "zonecluster"...
                      root@gzone1:~# clzc status zonecluster

                      === Zone Clusters ===

                      --- Zone Cluster Status ---

                      Name          Node Name   Zone HostName   Status    Zone Status
                      ----          ---------   -------------   ------    -----------
                      zonecluster   gzone1      lzone1          Offline   Running
                                    gzone2      lzone2          Offline   Running

                      root@gzone1:~#

                      * The above status still says Offline * Does that mean the zone has not completely booted?
                      Logged in: zlogin -C zonecluster
                      No errors; I got the hostname screen, followed by DNS configuration, and finally user creation.
                      Once inside lzone1 & lzone2: svcs -xv returned nothing.

                      I think I am doing something fundamentally wrong. I am a DBA trying to learn this in order to run 11gR2 RAC in a zone cluster. Please correct me or guide me if I am wrong.

                      Are zoneadm and clzc supposed to create different types of zones?

                      Still no sign of clprivnet
                      raag@lzone2:~$ ifconfig -a
                      lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                      inet 127.0.0.1 netmask ff000000
                      sc_ipmp0:1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 1500 index 2
                      inet 192.168.2.81 netmask ffffff00 broadcast 192.168.2.255
                      lo0:1: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                      inet6 ::1/128

                      Thank you

                      Edited by: 909634 on Apr 17, 2012 6:40 AM
                      • 8. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                        807928
                        Unfortunately, you still haven't quite got the procedure right. Please check the documentation:

                        http://docs.oracle.com/cd/E23623_01/html/E23437/gjbcb.html#scrolltoc

                        I can also recommend a good book with some step-by-step examples - "Oracle Solaris Cluster Essentials" :-)
                        Although it doesn't cover OSC 4.0, it is still very relevant. I use it as a crib sheet for stuff I can't remember.

                        Anyway, to your problem.

                        Roughly speaking, you need to:
                        # clzc configure zonecluster
                        # clzc install zonecluster
                        # clzc boot zonecluster
                        # zlogin -C zonecluster

                        You never use zoneadm or zonecfg directly.
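
                        If a zone has already been installed with zoneadm, it first needs to go back to the configured state before redoing the install with clzc; roughly (a sketch, see clzonecluster(1CL)):

                        # clzc halt zonecluster
                        # clzc uninstall zonecluster

                        and then repeat the install and boot steps above.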

                        Regards,

                        Tim
                        ---
                        • 9. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                          912637
                          Will certainly go through the book and run through documentation again.

                          Thanks for your time Tim.

                          Thank you.
                          • 10. Re: clprivnet0 not showing up in zone cluster (solaris 11 /Solaris Cluster 4.0)
                            912637
                            Thanks Tim, clprivnet came up in the zone once I used clzc to install.
                            /system/volatile/install.<processID>/install.log was useful in troubleshooting the error "clzc: (C801046)".
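
                            (For anyone hitting the same error: the install.<processID> directory name differs each run, so on the failing node something like the following finds the log; path pattern taken from the zoneadm output earlier in the thread.)

                            root@gzone1:~# tail /system/volatile/install.*/install_log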

                            Thanks again.