
    ZFS failure mode

    900070
      One of my zpools has just become "UNAVAILABLE", and I was wondering if there is a solution other than restoring from a backup (which I don't have).

      My 'archive' zpool was configured as a raidz2 pool. After installing additional disks in the system, I wanted to expand the pool by adding the new disks to the raidz2 vdev.

      Due to an error on my side, I accidentally added the four new disks as individual top-level devices, which I believe makes them a stripe alongside the raidz2 vdev; the commands were roughly as sketched below. I was not overly concerned until I found out that such devices cannot be removed again, other than by backing up the data, destroying the zpool, recreating it and restoring. I started working on my backup.
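
      For illustration only (the device names below are made up, not the real ones), the difference was roughly the following; if I remember correctly, the second form needed -f because of the mismatched replication level:

          # what I intended: grow the pool with a second raidz2 vdev
          zpool add archive raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

          # what I actually ran: each disk becomes its own unredundant top-level vdev
          zpool add -f archive c2t0d0 c2t1d0 c2t2d0 c2t3d0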

      Alas, before the backup was ready to receive the data, one of the new drives (one of the singletons) failed - it no longer spins up. As a result, zpool status reports that the zpool is unavailable and that the drive in question is missing.

      Is there a way to force ZFS to import the pool and extract as much data as possible, given that the failure happened very soon after the drives were added? I had something like the read-only import sketched below in mind. Partial data loss would be much more acceptable than losing everything in this case.
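
      What I was thinking of trying, assuming these options are supported by my zpool version (I have not run them yet), is something along these lines:

          # try to bring the pool in read-only, without writing anything to the remaining disks
          zpool export archive
          zpool import -o readonly=on archive

          # last resort: recovery-mode import, which discards the last few transactions
          zpool import -F archive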

      Kind regards...