This discussion is archived
1 Reply. Latest reply: Dec 20, 2012 12:33 PM by Nik

unplumb net0 in public network, the HA NFS service didn't failover

980954 Newbie
I have a two-node cluster running Oracle Solaris Cluster 4.1. I set up the HA NFS service on the cluster as shown below.

root@sgh28h13:~# scstat -g

-- Resource Groups and Resources --

            Group Name        Resources
            ----------        ---------
 Resources: resource-group-1  sgh28cluster global_Sym_R5_1G_d110-rs nfs-global-Sym_R5_1G-d110-admin-rs

-- Resource Groups --

            Group Name        Node Name  State    Suspended
            ----------        ---------  -----    ---------
     Group: resource-group-1  sgh28h13   Online   No
     Group: resource-group-1  sgh28h17   Offline  No

-- Resources --

            Resource Name                       Node Name  State    Status Message
            -------------                       ---------  -----    --------------
  Resource: sgh28cluster                        sgh28h13   Online   Online - LogicalHostname online.
  Resource: sgh28cluster                        sgh28h17   Offline  Offline

  Resource: global_Sym_R5_1G_d110-rs            sgh28h13   Online   Online
  Resource: global_Sym_R5_1G_d110-rs            sgh28h17   Offline  Offline

  Resource: nfs-global-Sym_R5_1G-d110-admin-rs  sgh28h13   Online   Online - Service is online.
  Resource: nfs-global-Sym_R5_1G-d110-admin-rs  sgh28h17   Offline  Offline

The NFS service was originally on sgh28h13, which has a single public-network interface, net0, for this service. So I ran "ifconfig net0 unplumb" to shut down net0, expecting the NFS service to fail over to the other node. But the service is still online on sgh28h13 even after I shut down both net0 and sc_ipmp0. I don't understand why the NFS service didn't fail over. Could somebody help?
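For comparison, a switchover can also be forced through the cluster framework itself rather than by breaking the interface. A sketch using the Solaris Cluster 4.x `clresourcegroup`/`clresource` commands, with the group name taken from the scstat output above (run as root on either node; node and group names are from this cluster, not generic):

```shell
# Show which node currently hosts the HA NFS resource group
clresourcegroup status resource-group-1

# Force a switchover of the group to the other node
clresourcegroup switch -n sgh28h17 resource-group-1

# Confirm the group and all of its resources came online on sgh28h17
clresourcegroup status resource-group-1
clresource status -g resource-group-1
```

If a manual switch like this works but the unplumb test does not, the problem is in failure detection (IPMP/monitoring) rather than in the failover mechanics of the resource group.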

root@sgh28h13:~# ipadm
NAME               CLASS/TYPE  STATE  UNDER     ADDR
clprivnet0         ip          ok     --        --
lo0                loopback    ok     --        --
   lo0/v4          static      ok     --
   lo0/v6          static      ok     --        ::1/128
net0               ip          ok     sc_ipmp0  --
net1               ip          ok     --        --
net2               ip          ok     --        --
sc_ipmp0           ipmp        ok     --        --
   sc_ipmp0/static1  static    ok     --
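Since net0 sits under the sc_ipmp0 IPMP group, one thing worth checking is whether the system actually saw the group as failed when the interface went away. A sketch using the standard Solaris 11 `ipmpstat` command (run on sgh28h13; output depends on this cluster's configuration):

```shell
# IPMP group state: the group must transition to "failed" for the
# cluster to treat the public network on this node as down
ipmpstat -g

# Per-interface state within each IPMP group
ipmpstat -i

# Probe targets used for failure detection
# (none are listed if only link-based detection is in use)
ipmpstat -t
```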

Message was edited by: user9111646

