To decommission my "old" storage system, I removed all the LUNs from my Solaris 10 SPARC M4000.
I removed the zoning and now I have the following:
c1 fc-fabric connected configured unknown
c1::500601603de031f7 disk connected configured unknown New storage
c1::500601613ce03323 disk connected configured failing Old Storage
c1::500601683ce03323 disk connected configured failing Old Storage
c1::500601693de031f7 disk connected configured unknown New storage
# cfgadm -al -o show_SCSI_LUN | grep fail
c1::500601613ce03323,0 disk connected configured failing
c1::500601683ce03323,0 disk connected configured failing
I have not been able to remove these 2 paths.
Could anyone help me ?
Thanks a lot.
How did you remove the devices?
I'm no cfgadm expert, but I believe there is a process for removing devices with cfgadm that depends on whether the device is hot-pluggable or not. Unless the device is hot-pluggable, I think a step prior to physically removing the device is to issue the cfgadm remove_device command.
Perhaps at this point, you should attempt to clean up the old device links?
# devfsadm -C
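As a sketch of the cleanup sequence (the WWNs below are taken from the cfgadm output above, and this assumes whatever holds the devices has already released them):

```shell
# Show only the failing SCSI LUN paths
cfgadm -al -o show_SCSI_LUN | grep failing

# Try to unconfigure each failing path; the -o unusable_SCSI_LUN option
# asks cfgadm to also drop LUNs that can no longer be reached. This will
# keep failing while an application still holds the device open.
cfgadm -c unconfigure -o unusable_SCSI_LUN c1::500601613ce03323
cfgadm -c unconfigure -o unusable_SCSI_LUN c1::500601683ce03323

# Finally, prune the stale /dev links and /devices entries
devfsadm -Cv
```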
Below is a posting from the Oracle SAN Community for customers that goes into this subject:
May 1, 2013 1:23 PM
Unconfiguring and removing LUNs and targets in Solaris without rebooting
When adding LUNs, it is clearly a "bottom up" operation: the LUN is first discovered by Solaris, then it gets configured, and only then can it be put to use by various applications. The opposite is true when removing LUNs; it should be viewed as a "top down" approach. The LUN has to be released by its application(s) first; then, and only then, will Solaris be able to unconfigure it and clean up the device entries.
Normally, when a LUN is no longer seen by Solaris 10, it will be listed as failed/failing in cfgadm. Then, if no applications or processes are still holding onto the device entry in the device tree, it will change to the "unusable" state within a minute. If the LUN is still listed in cfgadm as "failing" after a minute, then most likely some application or process is still holding onto it.
This could be as simple as unmounting a filesystem, or could be things like VxVM, Netbackup, Cluster software, Oracle, ZFS, SVM, HDLM, Powerpath, etc. We cannot predict what all those apps could be nor how exactly one would safely remove the lun(s) from use in all of those possible applications.
Sometimes we can work around issues and clean it up, sometimes we cannot. VxVM is the biggest offender here, as people don't realize that if the device is still seen in the output of 'vxdisk list' then we are not going to be able to unconfigure it (or clean up "unusable" entries) until VxVM is told to "forget" about that device.
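If VxVM is involved, a minimal sketch of telling it to "forget" an old disk might look like this (the disk access name is illustrative, not taken from the poster's system):

```shell
# Anything still listed here is still known to VxVM and will block
# the unconfigure
vxdisk list

# Remove the stale disk from VxVM's view (after making sure it is no
# longer part of any disk group)
vxdisk rm emc0_old_lun1    # illustrative access name

# Rescan so VxVM's device list matches the OS device tree
vxdctl enable
```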
The quickest, cleanest way to resolve this is to reboot the server.
It is sometimes (but not always) possible to determine which process (such as Veritas Volume Manager or others) is holding onto the device entries in the device tree, but many times we are unable to determine the process and have to reboot anyway.
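One way (not always conclusive) to look for such holders is fuser against the device nodes; the device path below is illustrative, not taken from the poster's output:

```shell
# List PIDs (and, with -u, user names) of processes holding the device open
fuser -u /dev/rdsk/c1t500601613CE03323d0s2

# Also check for a filesystem on the LUN that is still mounted
mount | grep c1t500601613CE03323
```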
There is not going to be any one single universal way to remove devices. There are too many variables that affect the equation.
The following customer-accessible documents on the My Oracle Support portal go into some things that can be attempted to find and release an application/process hold on a device, which would move the LUN from the "failing" to the "unusable" state in cfgadm. If successful, the "unusable" devices can then be cleared from cfgadm using the appropriate command.
Relevant KM documents:
1017942.1 Solaris[TM] 8/9: How to remove device entries for SAN-attached storage using the Sun StorEdge[TM] SAN Foundation Kit software
1020671.1 VxVM: Solaris LUN Reconfiguration Guidelines
1018716.1 How to Unconfigure a Single Lun from a Target which has multiple Luns