
NIC failure detected when the other node goes down

888897 Newbie
#define "nic" means interface :)

Hello

I have the following setup:

Two Solaris 10 8/11 (32-bit) VMs - the downloaded VMware image, run under VirtualBox.

Each VM has 1 GB of RAM (desktop disabled), a 10 GB virtual disk attached (quota set in ZFS) and 3 NICs; the CPU is an E3400 @ 2.60 GHz.

On each VM the VirtualBox network configuration is the same (a sketch of the matching Solaris-side setup follows the list):

- nic0 - Host-only adapter: nodeA 192.168.56.65, nodeB 192.168.56.66 (/24)
- nic1 - Internal network: nodeA 1.0.0.65, nodeB 1.0.0.66 (/24)
- nic2 - Internal network: not configured, left alone for cluster communication
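
For reference, this is roughly how that maps to the Solaris side (a minimal sketch only - I am assuming nic0/nic1 show up as e1000g0/e1000g1; the /24 masks are the same as above):

# nodeA - configure the two public interfaces (nodeB uses .66 instead of .65)
ifconfig e1000g0 plumb
ifconfig e1000g0 192.168.56.65 netmask 255.255.255.0 up    # Host-only adapter
ifconfig e1000g1 plumb
ifconfig e1000g1 1.0.0.65 netmask 255.255.255.0 up         # Internal network
# e1000g2 stays unplumbed, reserved for the cluster interconnect

# persistent configuration across reboots
echo "192.168.56.65 netmask 255.255.255.0 up" > /etc/hostname.e1000g0
echo "1.0.0.65 netmask 255.255.255.0 up" > /etc/hostname.e1000g1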

On nodeA and nodeB I had a cluster installed, but a power cut shut down my PC and the nodeB configuration was lost.

Now I have decided to reinstall the whole thing from the beginning:

NodeB - complete re-installation
NodeA - restarted in non-cluster mode, ran "#cluster remove", then restarted again (see the sketch below)
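
For what it's worth, this is roughly how I checked nodeA afterwards (a sketch only; the grep patterns are guesses at the naming, not exact package or service names):

# nodeA is booted in non-cluster mode (-x added to the kernel line in GRUB)

# is anything cluster-related still installed or running after the removal?
svcs -a | grep -i cluster       # leftover Sun Cluster SMF services?
pkginfo | grep -i SUNWsc        # leftover Sun Cluster packages?
ps -ef | grep in.mpathd         # the IPMP daemon that logs the NIC failure messages

I did not run /usr/cluster/bin/scinstall -r, which I believe is the documented way to unconfigure a node - is that the step I am missing?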

Now the thing is that nodeA keeps shutting down all the NICs whenever nodeB is not up. I had this problem before (with the old cluster), but I was expecting it to disappear after I destroyed the cluster.

The error is: "NIC failure detected on e1000g1 of group sc_ipmp0" - it happens on both of the configured NICs (the first two).
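
If it helps, this is where I would look for the group on nodeA (a sketch; I have not pasted the actual output here):

# the group membership should be visible in the interface flags
ifconfig -a | grep -i group          # look for "groupname sc_ipmp0"

# and check whether the group is also set in the persistent config
grep -i group /etc/hostname.e1000g*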

Did I deactivate the cluster on nodeA correctly? Is there anything more to do to get rid of the useless fencing mechanism that keeps shutting down my NICs? It feels like zombie cluster fencing :)
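
If it really is only a leftover IPMP group, I am guessing something like this would detach the interfaces from it (a sketch only - I have not tried it yet, so please tell me if this is the wrong approach):

# take the interfaces out of the sc_ipmp0 group at runtime
ifconfig e1000g0 group ""
ifconfig e1000g1 group ""

# then remove any "group sc_ipmp0" entry from /etc/hostname.e1000g0 and
# /etc/hostname.e1000g1 by hand so it does not come back after a reboot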

And when I start up nodeB, the NICs on nodeA magically come back up again :)

