3 Replies Latest reply: Aug 1, 2012 1:06 PM by 871253

    Oracle 11gR2 RAC in LDOM Network issue

    871253
      Hi, requesting your expert advice regarding this configuration.

      We are implementing LDoms 2.2 on two SPARC T4-4 servers for Oracle 11gR2 RAC, with Solaris 10 U10 on both the control and guest domains. The setup on each primary/control domain is: two 10GbE links are aggregated, four VLANs are trunked on the aggregate, and a vSwitch is created using the aggr as its device, as follows per T4-4:

      NOTE: VLAN 1501 is the data connection and VLAN 10 is the heartbeat for one RAC cluster; VLANs 1601 and 11 serve the other RAC cluster. Four LDOMs altogether.

      ldm add-vswitch vid=1501,1601,10,11 net-dev=aggr1 primary-vsw0 primary

      ldm add-vnet pvid=1501 vnetprod primary-vsw0 guest1
      ldm add-vnet pvid=10 vnethb primary-vsw0 guest1

      ldm add-vnet pvid=1601 vnetprod primary-vsw0 guest2
      ldm add-vnet pvid=11 vnethb primary-vsw0 guest2
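
      For reference, the VLAN membership each guest's vnets actually ended up with can be checked from the control domain with `ldm list -o network` (guest names as above):

      ldm list -o network guest1
      ldm list -o network guest2

      The PVID column in that output should show 1501/10 for guest1 and 1601/11 for guest2, with the VID column empty since the vnets carry no tagged VLANs.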

      The vnets inside the LDOMs are not tagged:
      vnet1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
      inet 10.220.128.20 netmask ffffff80 broadcast 10.220.128.127
      ether 0:14:4f:f9:ec:7f
      vnet2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
      inet 192.168.2.11 netmask ffffff80 broadcast 192.168.2.127
      ether 0:14:4f:fb:2b:8f

      Here is the whole configuration:

      root@gp-cpu-suh004 # ldm -V

      Logical Domains Manager (v 2.2.0.0)
      Hypervisor control protocol v 1.9
      Using Hypervisor MD v 1.4

      System PROM:
      Hostconfig v. 1.2.0. @(#)Hostconfig 1.2.0.a 2012/05/11 07:34
      Hypervisor v. 1.11.0. @(#)Hypervisor 1.11.0.a 2012/05/11 05:28
      OpenBoot v. 4.34.0 @(#)OpenBoot 4.34.0 2012/04/30 14:26
      root@gp-cpu-suh004 # ldm ls
      NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
      primary active -n-cv- UART 32 16G 2.5% 48m
      oidrac1 active -n---- 5000 32 16G 0.0% 27m
      root@gp-cpu-suh004 # ldm ls -l
      NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
      primary active -n-cv- UART 32 16G 3.1% 48m

      SOFTSTATE
      Solaris running

      UUID
      e73421fe-7003-e748-be7e-801fee5bfcc7

      MAC
      00:21:28:f1:95:26

      HOSTID
      0x85f19526

      CONTROL
      failure-policy=ignore
      extended-mapin-space=off
      cpu-arch=native

      DEPENDENCY
      master=

      CORE
      .......

      VCPU
      ......

      MEMORY
      RA PA SIZE
      0x20000000 0x20000000 16G

      CONSTRAINT
      threading=max-throughput

      VARIABLES
      auto-boot-on-error?=true
      auto-boot?=true
      boot-device=/pci@400/pci@1/pci@0/pci@0/LSI,sas@0/disk@w5000cca0251e7a29,0:a
      keyboard-layout=US-English
      nvramrc=." ChassisSerialNumber 1207BDYFFE " cr
      use-nvramrc?=true

      IO
      DEVICE PSEUDONYM OPTIONS
      ......
      .....

      VCC
      NAME PORT-RANGE
      primary-vcc0 5000-5100

      VSW
      NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
      primary-vsw-mgmt 00:14:4f:fb:75:c0 igb1 0 switch@0 1 1 1500 on
      primary-vsw0 00:14:4f:fa:33:8b aggr1 1 switch@1 1 1 1501,1601,10,11 1500 on

      VDS
      NAME VOLUME OPTIONS MPGROUP DEVICE
      primary-vds0 rootoid /dev/dsk/c14t50060E8005BFAA04d1s2
      data_oid /dev/dsk/c14t50060E8005BFAA04d2s2
      ocr_oid /dev/dsk/c14t50060E8005BFAA04d3s2

      VCONS
      NAME SERVICE PORT
      UART

      ------------------------------------------------------------------------------
      NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
      oidrac1 active -n---- 5000 32 16G 0.0% 27m

      SOFTSTATE
      Solaris running

      UUID
      0fcbbf21-14a2-eb21-f544-d4424212f3ef

      MAC
      00:14:4f:f9:1b:d4

      HOSTID
      0x84f91bd4

      CONTROL
      failure-policy=ignore
      extended-mapin-space=off
      cpu-arch=native

      DEPENDENCY
      master=

      CORE
      CID CPUSET
      ......

      VCPU
      VID PID CID UTIL STRAND
      ........

      MEMORY
      RA PA SIZE
      0x20000000 0x420000000 16G

      CONSTRAINT
      threading=max-throughput

      VARIABLES
      auto-boot?=true
      boot-device=disk:a
      keyboard-layout=US-English

      NETWORK
      NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
      vnet1 primary-vsw-mgmt@primary 0 network@0 00:14:4f:fa:61:77 1 1500
      vnetprod primary-vsw0@primary 1 network@1 00:14:4f:f9:ec:7f 1501 1500
      vnethb primary-vsw0@primary 2 network@2 00:14:4f:fb:2b:8f 10 1500

      DISK
      NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
      oneidrootdisk rootoid@primary-vds0 0 disk@0 primary
      oid_data data_oid@primary-vds0 1 disk@1 primary
      oid_ocr ocr_oid@primary-vds0 2 disk@2 primary

      VCONS
      NAME SERVICE PORT
      oidrac1 primary-vcc0@primary 5000




      root@gp-cpu-suh004 # ldm ls-services
      VCC
      NAME LDOM PORT-RANGE
      primary-vcc0 primary 5000-5100

      VSW
      NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
      primary-vsw-mgmt primary 00:14:4f:fb:75:c0 igb1 0 switch@0 1 1 1500 on
      primary-vsw0 primary 00:14:4f:fa:33:8b aggr1 1 switch@1 1 1 1501,1601,10,11 1500 on

      VDS
      NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
      primary-vds0 primary rootoid /dev/dsk/c14t50060E8005BFAA04d1s2
      data_oid /dev/dsk/c14t50060E8005BFAA04d2s2
      ocr_oid /dev/dsk/c14t50060E8005BFAA04d3s2

      root@gp-cpu-suh004 # dladm show-link
      vsw0 type: non-vlan mtu: 1500 device: vsw0
      vsw1 type: non-vlan mtu: 1500 device: vsw1
      vsw1501001 type: vlan 1501 mtu: 1500 device: vsw1
      igb0 type: non-vlan mtu: 1500 device: igb0
      igb1 type: non-vlan mtu: 1500 device: igb1
      qlge0 type: non-vlan mtu: 1500 device: qlge0
      qlge1 type: non-vlan mtu: 1500 device: qlge1
      qlge2 type: non-vlan mtu: 1500 device: qlge2
      qlge3 type: non-vlan mtu: 1500 device: qlge3
      aggr1 type: non-vlan mtu: 1500 aggregation: key 1

      root@gp-cpu-suh004 # ifconfig -a
      lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
      inet 127.0.0.1 netmask ff000000
      igb0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
      inet 10.223.12.14 netmask ffffff00 broadcast 10.223.12.255
      ether 0:21:28:f1:95:26
      vsw1501001: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
      inet 10.220.128.9 netmask ffffff80 broadcast 10.220.128.127
      ether 0:14:4f:fa:33:8b
      root@gp-cpu-suh004 # netstat -nr

      Routing Table: IPv4
      Destination Gateway Flags Ref Use Interface
      -------------------- -------------------- ----- ----- ---------- ---------
      default 10.220.128.1 UG 1 7
      10.220.128.0 10.220.128.9 U 1 5 vsw1501001
      10.223.0.0 10.223.12.1 UG 1 2
      10.223.12.0 10.223.12.14 U 1 1 igb0
      224.0.0.0 10.220.128.9 U 1 0 vsw1501001
      127.0.0.1 127.0.0.1 UH 8 261 lo0
        • 1. Re: Oracle 11gR2 RAC in LDOM Network issue
          871253
          Sorry, the main reason I posted this question is that the LDOMs have connectivity issues between them on the SAME VLAN/subnet. The RAC pre-install check errors out with "specified network interface doesn't maintain connectivity across cluster nodes". Our switch people don't see any issues on the switch side and are pointing to the LDOM vSwitch.

          The link aggregation is set up with LACP in active mode and an L3 policy. Plumbing the vsw on the control domain shows no connectivity issues; the problem appears only in the guests. Thanks.
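
          For completeness, the aggregation described above would have been created on each control domain with something like the following (the two 10GbE device names are an assumption; key 1 matches the `dladm show-link` output above):

          # Solaris 10: create aggregation key 1 over two 10GbE ports,
          # LACP in active mode, L3 (IP header) load-balancing policy
          dladm create-aggr -P L3 -l active -d qlge0 -d qlge1 1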
          • 2. Re: Oracle 11gR2 RAC in LDOM Network issue
            user220123
            Can you connect to the vsw0 interface on the control domain? I noticed that you haven't assigned a PVID on the vswitch itself. I had connectivity issues when I set up my vswitches without assigning a PVID.

            Also, could you post the specific vnet configurations for all members of each cluster? I noticed that you assigned different PVIDs to guest1 and guest2 respectively. What about the other members of each cluster (per your note, guest1 and guest2 participate in two separate RAC clusters)?
            • 3. Re: Oracle 11gR2 RAC in LDOM Network issue
              871253
              Yes, I can connect to the vswitch interface on the control domain. I didn't specify a PVID because my understanding is that untagged frames are assigned to the PVID VLAN by default; effectively the PVID for this interface is 1.
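
              (For completeness, if a PVID were to be set explicitly on the vswitch as you suggest, it would be something like the command below, keeping the existing tagged VLANs; whether it changes the guest-to-guest behavior here is exactly the open question:)

              ldm set-vsw pvid=1 vid=1501,1601,10,11 primary-vsw0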




              Here's the VNET config for the other LDOM in the RAC cluster:

              VSW
              NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
              primary-vsw-mgmt 00:14:4f:f9:91:fa igb1 0 switch@0 1 1 1500 on
              primary-vsw0 00:14:4f:fa:8e:cf aggr1 1 switch@1 1 1 1501,1601,10,11 1500 on


              NETWORK
              NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
              vnet1 primary-vsw-mgmt 0 00:14:4f:fb:65:6d 1
              vnetprod primary-vsw0 1 00:14:4f:fa:2b:02 1501
              vnethb primary-vsw0 2 00:14:4f:f8:12:c1 10

              Thanks for reviewing my configuration.