4 Replies · Latest reply: Mar 14, 2013 11:56 AM by HartmutStreppel

Sun cluster 3.3 in guest LDOM - disable IPMP for public

871253 Newbie
Hi,
I have set up HA-NFS with Sun Cluster 3.3 in two guest LDOMs on two separate T4-2 servers. Each guest LDOM gets one vnet for the cluster interconnect and one vnet for the public network; the vswitch for the public network is backed by a 2x10GbE link aggregation in the control domain, so there is no need for IPMP on the public interface inside the LDOM. However, Sun Cluster automatically configures IPMP for the public network even with a single vnet, and here the group comes up as failed.

I have the exact same setup for another cluster in guest LDOMs, and the single-vnet public IPMP group works fine there. I can't seem to get this one working. Any tips?
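For reference, a rough sketch of the control-domain side of the setup described above; the physical NIC names, the vswitch name and the guest domain name here are assumptions for illustration, not taken from my actual config:

# On each primary (control) domain: aggregate the two 10GbE ports
# and back the public vswitch with the aggregation (device names assumed)
dladm create-aggr -d ixgbe0 -d ixgbe1 1
ldm add-vsw net-dev=aggr1 public-vsw primary

# Single public vnet for the guest domain on that vswitch (domain name assumed)
ldm add-vnet vnet2 public-vsw nfs-guest1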

Node 1:

vnet2: flags=19000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
inet 192.168.130.141 netmask ffffff80 broadcast 192.168.130.255
groupname sc_ipmp0
ether 0:14:4f:f8:b9:47
vnet2:1: flags=1011040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FAILED,FIXEDMTU> mtu 1500 index 3
inet 192.168.130.150 netmask ffffff80 broadcast 192.168.130.255


cat /etc/hostname.vnet2
nfsp0gux001 group sc_ipmp0 -failover


clrs status -v

=== Cluster Resources ===

Resource Name      Node Name      State     Status Message
-------------      ---------      -----     --------------
nfsp-rs            nfsp0gux002    Offline   Offline
                   nfsp0gux001    Online    Online - Service is online.

nfsp-hastp-rs      nfsp0gux002    Offline   Offline
                   nfsp0gux001    Online    Online

nfsp-lh-rs         nfsp0gux002    Offline   Offline
                   nfsp0gux001    Online    Degraded - IPMP Failure.

Node 2:

vnet2: flags=19000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
inet 192.168.130.142 netmask ffffff80 broadcast 192.168.130.255
groupname sc_ipmp0
ether 0:14:4f:f8:9f:25
  • 1. Re: Sun cluster 3.3 in guest LDOM - disable IPMP for public
    HartmutStreppel Explorer
    A couple of things:
    - OSC's logic with the public network relies on IPMP, even if there is only one network "interface" behind it. So removing IPMP won't help
    - You are saying that the link aggregation in the IO domain works well?
    - Are you saying that the vnet within the guest works well as well? But IPMP complains about the interface being down?
    - Can you use the underlying interface with no problems from within the guest?
    - Did you try configuring IPMP in the IO domain instead of a link aggregation? Does this help?

    What are the IPMP error messages in /var/adm/messages?

    I do not think this is a cluster issue but a networking one, either with LDoms or with Solaris. OSC is only interested in the status of the IPMP group; it does not set it. That is done by the base OS (in.mpathd).
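
    To see both sides, something along these lines should be enough (a sketch, using the interface name from your output; clnode status -m shows the IPMP group state the cluster sees):

    # What in.mpathd is logging about the group
    grep in.mpathd /var/adm/messages | tail

    # Current flags and IPMP group membership of the public interface
    ifconfig vnet2

    # IPMP group status as seen by the cluster framework
    clnode status -m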

    Regards
    Hartmut
  • 2. Re: Sun cluster 3.3 in guest LDOM - disable IPMP for public
    871253 Newbie
    Thanks for the reply.

    Yes, it was a network configuration issue caused by SC: it changed /etc/hostname.vnet2 to -failover during cluster configuration. After replacing -failover with up, the IPMP group is now coming up. This is weird behaviour, because I have another two-node cluster in guest LDOMs set up the same way, with the same single-interface public IPMP configuration, and that one did not have any issue.

    Thanks for the help though.

    Before (as written by the cluster configuration):

    cat /etc/hostname.vnet2
    nfsp0gux001 group sc_ipmp0 -failover

    After (working):

    cat /etc/hostname.vnet2
    nfsp0gux001 group sc_ipmp0 up
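
    For anyone hitting the same thing, this is roughly how I would apply the fix on a running node without a reboot; treat it as a sketch, I have not re-verified every step on this cluster:

    # Clear the NOFAILOVER flag on the data address so in.mpathd
    # treats it as a normal failover address again
    ifconfig vnet2 failover

    # Flags should no longer show NOFAILOVER/FAILED
    ifconfig vnet2

    # Re-check the logical-host resource once the IPMP group is healthy
    clrs status -v nfsp-lh-rs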
  • 3. Re: Sun cluster 3.3 in guest LDOM - disable IPMP for public
    871253 Newbie
    IPMP issue with one interface in guest LDOM.
  • 4. Re: Sun cluster 3.3 in guest LDOM - disable IPMP for public
    HartmutStreppel Explorer
    So it seems this is not easy to reproduce? If you ever see this again during an OSC installation, please note your steps and let us know. This should not happen as part of the installation process.
