ZFS RAID disk replacement not going smoothly
Hi, I have a relic SunFire V245 (Solaris 10) with an external RAID array attached, configured as a raidz pool. Until now, my method of replacing failed disks
has worked -- so at this point I am not quite sure what to do. Here is my normal procedure:
1) disk fails
2) pull disk
3) log onto RAID, delete LUN, delete logical drive, and then re-create both for failed disk (all via the RAID - not through the Solaris box)
4) run zpool status and note the UNAVAIL drive
5) run zpool replace on that drive (see the sketch after this list)
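For reference, steps 4 and 5 boil down to something like the following (the pool name "tank" and the device name c2t1d0 are placeholders for the example, not my actual names):

# step 4: find the drive the pool reports as UNAVAIL
zpool status tank

# step 5: resilver onto the re-created LUN; since it comes back under the
# same device name, no new_device argument is needed
zpool replace tank c2t1d0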
Now -- as I learn ZFS, I see all kinds of other options documented that I'm not sure I actually need to do (since it isn't a mirrored pool).