
    LDOM SUN Cluster Interconnect failure

      I am building a test Sun Cluster on Solaris 10 under LDoms 1.3.

      In my environment I have a T5120. I set up two guest domains, installed the Sun Cluster software, and ran scinstall, but it failed.

      Node 2 comes up, but node 1 throws the following messages:
      #################################################################
      Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
      SunOS Release 5.10 Version Generic_139555-08 64-bit
      Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
      Use is subject to license terms.
      Hostname: test1
      Configuring devices.
      Loading smf(5) service descriptions: 37/37
      /usr/cluster/bin/scdidadm: Could not load DID instance list.
      /usr/cluster/bin/scdidadm: Cannot open /etc/cluster/ccr/did_instances.
      Booting as part of a cluster
      NOTICE: CMM: Node test2 (nodeid = 1) with votecount = 1 added.
      NOTICE: CMM: Node test1 (nodeid = 2) with votecount = 0 added.
      NOTICE: clcomm: Adapter vnet2 constructed
      NOTICE: clcomm: Adapter vnet1 constructed
      NOTICE: CMM: Node test1: attempting to join cluster.
      NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
      NOTICE: clcomm: Path test1:vnet1 - test2:vnet1 errors during initiation
      NOTICE: clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
      WARNING: Path test1:vnet1 - test2:vnet1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
      WARNING: Path test1:vnet2 - test2:vnet2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
      clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
      ##################################################################################
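      For reference, errno 62 on Solaris should be ETIME ("timer expired"), which I read as the path-initiation handshake timing out rather than being refused outright. It can be confirmed against the header file:

      bash-3.00# awk '$2 == "ETIME"' /usr/include/sys/errno.h
      #define ETIME   62      /* timer expired */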
      CREATED VIRTUAL SWITCHES AND VNETS ON THE PRIMARY DOMAIN:
      ldm add-vsw mode=sc cluster-vsw0 primary
      ldm add-vsw mode=sc cluster-vsw1 primary
      ldm add-vnet vnet2 cluster-vsw0 test1
      ldm add-vnet vnet3 cluster-vsw1 test1
      ldm add-vnet vnet2 cluster-vsw0 test2
      ldm add-vnet vnet3 cluster-vsw1 test2
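      Note that I created cluster-vsw0 and cluster-vsw1 without a net-dev backing device, since both guests live on the same T5120 and I expected the interconnect traffic to stay on the internal inter-vnet channels; I assume this is also why vsw1 and vsw2 report speed 0 / duplex unknown below. If a physical backing port turned out to be required, I believe it could be attached afterwards with something like this (e1000g1/e1000g2 here are only placeholders for whichever ports are actually free on the box):

      ldm set-vsw net-dev=e1000g1 cluster-vsw0
      ldm set-vsw net-dev=e1000g2 cluster-vsw1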

      PRIMARY DOMAIN:
      bash-3.00# dladm show-dev
      vsw0 link: up speed: 1000 Mbps duplex: full
      vsw1 link: up speed: 0 Mbps duplex: unknown
      vsw2 link: up speed: 0 Mbps duplex: unknown
      e1000g0 link: up speed: 1000 Mbps duplex: full
      e1000g1 link: down speed: 0 Mbps duplex: half
      e1000g2 link: down speed: 0 Mbps duplex: half
      e1000g3 link: up speed: 1000 Mbps duplex: full
      bash-3.00# dladm show-link
      vsw0 type: non-vlan mtu: 1500 device: vsw0
      vsw1 type: non-vlan mtu: 1500 device: vsw1
      vsw2 type: non-vlan mtu: 1500 device: vsw2
      e1000g0 type: non-vlan mtu: 1500 device: e1000g0
      e1000g1 type: non-vlan mtu: 1500 device: e1000g1
      e1000g2 type: non-vlan mtu: 1500 device: e1000g2
      e1000g3 type: non-vlan mtu: 1500 device: e1000g3
      bash-3.00#
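      If the ping test I describe further down fails, it might also help to snoop the corresponding switch device from the primary domain while the test runs; going by the IDs, vsw1 and vsw2 should line up with cluster-vsw0 and cluster-vsw1:

      bash-3.00# snoop -d vsw1

      Though, with INTER-VNET-LINK set to on, guest-to-guest frames may bypass the switch entirely, so an empty snoop would not be conclusive.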

      NODE1:
      -bash-3.00# dladm show-link
      vnet0 type: non-vlan mtu: 1500 device: vnet0
      vnet1 type: non-vlan mtu: 1500 device: vnet1
      vnet2 type: non-vlan mtu: 1500 device: vnet2
      -bash-3.00# dladm show-dev
      vnet0 link: unknown speed: 0 Mbps duplex: unknown
      vnet1 link: unknown speed: 0 Mbps duplex: unknown
      vnet2 link: unknown speed: 0 Mbps duplex: unknown
      -bash-3.00#
      NODE2:
      -bash-3.00# dladm show-link
      vnet0 type: non-vlan mtu: 1500 device: vnet0
      vnet1 type: non-vlan mtu: 1500 device: vnet1
      vnet2 type: non-vlan mtu: 1500 device: vnet2
      -bash-3.00#
      -bash-3.00#
      -bash-3.00# dladm show-dev
      vnet0 link: unknown speed: 0 Mbps duplex: unknown
      vnet1 link: unknown speed: 0 Mbps duplex: unknown
      vnet2 link: unknown speed: 0 Mbps duplex: unknown
      -bash-3.00#
      ##################################################################################
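      Before rerunning scinstall, I plan to verify basic connectivity over the candidate interconnect vnets by plumbing temporary test addresses in both guests and pinging across. The 192.168.100.x addresses are made up for the test and would be unplumbed before the next scinstall attempt:

      on test1:
      -bash-3.00# ifconfig vnet1 plumb 192.168.100.1 netmask 255.255.255.0 up
      on test2:
      -bash-3.00# ifconfig vnet1 plumb 192.168.100.2 netmask 255.255.255.0 up
      then from test1:
      -bash-3.00# ping 192.168.100.2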
      And this is the configuration I gave while setting up scinstall:
      >>> Cluster Transport Adapters and Cables <<<
      You must identify the two cluster transport adapters which attach
      this node to the private cluster interconnect.

      For node "test1",
      What is the name of the first cluster transport adapter [vnet1]?

      Will this be a dedicated cluster transport adapter (yes/no) [yes]?

      All transport adapters support the "dlpi" transport type. Ethernet
      and Infiniband adapters are supported only with the "dlpi" transport;
      however, other adapter types may support other types of transport.

      For node "test1",
      Is "vnet1" an Ethernet adapter (yes/no) [yes]?

      Is "vnet1" an Infiniband adapter (yes/no) [yes]? no

      For node "test1",
      What is the name of the second cluster transport adapter [vnet3]? vnet2

      Will this be a dedicated cluster transport adapter (yes/no) [yes]?

      For node "test1",
      Name of the switch to which "vnet2" is connected [switch2]?

      For node "test1",
      Use the default port name for the "vnet2" connection (yes/no) [yes]?

      For node "test2",
      What is the name of the first cluster transport adapter [vnet1]?

      Will this be a dedicated cluster transport adapter (yes/no) [yes]?

      For node "test2",
      Name of the switch to which "vnet1" is connected [switch1]?

      For node "test2",
      Use the default port name for the "vnet1" connection (yes/no) [yes]?

      For node "test2",
      What is the name of the second cluster transport adapter [vnet2]?

      Will this be a dedicated cluster transport adapter (yes/no) [yes]?

      For node "test2",
      Name of the switch to which "vnet2" is connected [switch2]?

      For node "test2",
      Use the default port name for the "vnet2" connection (yes/no) [yes]?
      ############################################################################
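      For what it's worth, once both nodes at least attempt to join, my understanding is that the transport paths can be inspected from a cluster node with the Sun Cluster 3.2 commands:

      -bash-3.00# /usr/cluster/bin/scstat -W
      -bash-3.00# /usr/cluster/bin/clinterconnect status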

      I have set up the configurations like this:
      ldm list -l nodename
      NODE1:
      NETWORK
      NAME    SERVICE               ID  DEVICE     MAC                MODE  PVID  VID  MTU   LINKPROP
      vnet1   primary-vsw0@primary  0   network@0  00:14:4f:f9:61:63        1          1500
      vnet2   cluster-vsw0@primary  1   network@1  00:14:4f:f8:87:27        1          1500
      vnet3   cluster-vsw1@primary  2   network@2  00:14:4f:f8:f0:db        1          1500
      ldm list -l nodename
      NODE2:
      NETWORK
      NAME    SERVICE               ID  DEVICE     MAC                MODE  PVID  VID  MTU   LINKPROP
      vnet1   primary-vsw0@primary  0   network@0  00:14:4f:f9:a1:68        1          1500
      vnet2   cluster-vsw0@primary  1   network@1  00:14:4f:f9:3e:3d        1          1500
      vnet3   cluster-vsw1@primary  2   network@2  00:14:4f:fb:03:83        1          1500


      ldm list-services

      VSW
      NAME          LDOM     MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
      primary-vsw0  primary  00:14:4f:f9:25:5e  e1000g0  0   switch@0            1                1          1500        on
      cluster-vsw0  primary  00:14:4f:fb:db:cb           1   switch@1            1                1          1500  sc    on
      cluster-vsw1  primary  00:14:4f:fa:c1:58           2   switch@2            1                1          1500  sc    on

      ldm list-bindings primary
      VSW
      NAME          MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
      primary-vsw0  00:14:4f:f9:25:5e  e1000g0  0   switch@0            1                1          1500        on
          PEER             MAC                PVID  VID  MTU   LINKPROP  INTERVNETLINK
          vnet1@gitserver  00:14:4f:f8:c0:5f  1          1500
          vnet1@racc2      00:14:4f:f8:2e:37  1          1500
          vnet1@test1      00:14:4f:f9:61:63  1          1500
          vnet1@test2      00:14:4f:f9:a1:68  1          1500

      NAME          MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
      cluster-vsw0  00:14:4f:fb:db:cb           1   switch@1            1                1          1500  sc    on
          PEER         MAC                PVID  VID  MTU   LINKPROP  INTERVNETLINK
          vnet2@test1  00:14:4f:f8:87:27  1          1500
          vnet2@test2  00:14:4f:f9:3e:3d  1          1500

      NAME          MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
      cluster-vsw1  00:14:4f:fa:c1:58           2   switch@2            1                1          1500  sc    on
          PEER         MAC                PVID  VID  MTU   LINKPROP  INTERVNETLINK
          vnet3@test1  00:14:4f:f8:f0:db  1          1500
          vnet3@test2  00:14:4f:fb:03:83  1          1500
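      One thing worth spelling out, since it confused me at first: the ldm vnet names do not match the instance names inside the guests. Going by the IDs and MACs above, ldm's vnet2 (cluster-vsw0, network@1) should be instance vnet1 inside the guest, and ldm's vnet3 (cluster-vsw1, network@2) should be guest vnet2, which is why I answered vnet1/vnet2 in scinstall. The mapping can be double-checked by MAC from inside a guest, for example on test1:

      -bash-3.00# ifconfig vnet1 plumb
      -bash-3.00# ifconfig vnet1
      vnet1: flags=... mtu 1500 index 2
              inet 0.0.0.0 netmask 0
              ether 0:14:4f:f8:87:27

      That ether address matches vnet2@test1 on cluster-vsw0 in the bindings above.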




      Any ideas, team? I believe the cluster interconnect adapters did not come up successfully.

      I would appreciate any guidance or clue on how to correct the private interconnect for clustering between the two guest LDoms.