
    Cleaning up Node after automatic recovery has moved a Zone

    832842
      I have a server pool with two servers and one Solaris 11 zone, and automatic recovery is enabled. Automatic recovery migrates the zone to the other server when the server on which it is currently running is powered off, so far so good.

      But after booting this server again, it has a damaged zpool:
      -----
      pool: z21d28adc-12d6-40e2-814b-0134711cabe4
      state: UNAVAIL
      status: One or more devices could not be opened. There are insufficient
      replicas for the pool to continue functioning.
      action: Attach the missing device and online it using 'zpool online'.
      see: http://www.sun.com/msg/ZFS-8000-3C
      scan: none requested
      config:

      NAME                                                                                  STATE    READ WRITE CKSUM
      z21d28adc-12d6-40e2-814b-0134711cabe4                                                 UNAVAIL     0     0     0  insufficient replicas
        /var/mnt/virtlibs/1339595115368/db392491-d78f-41b0-8278-9164c11224d0/virtdisk/data  UNAVAIL     0     0     0  cannot open
      -----


      and of course a zone which is not running:
      -----
      ID NAME      STATUS     PATH                                                                                          BRAND    IP
       0 global    running    /                                                                                             solaris  shared
       - Sol11s71  installed  /var/mnt/oc-zpools/21d28adc-12d6-40e2-814b-0134711cabe4/17e66dda-02b7-4ec5-875b-b5f14ddb6575  solaris  excl
      ------


      This is correct so far, as the zone is now running on the other node.
      Is there a way to automatically clean up these remnants, or do I have to uninstall and delete the zone manually and destroy the pool?
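
      If manual cleanup is the answer, I assume it would look roughly like this on the recovered node (zone and pool names taken from the output above); I am not sure whether this interferes with what Ops Center expects to manage itself, or whether an UNAVAIL pool can simply be exported like this:
      -----
      # remove the stale zone installation and its configuration
      # (uninstall may complain because the zonepath sits on the UNAVAIL pool)
      zoneadm -z Sol11s71 uninstall -F
      zonecfg -z Sol11s71 delete -F

      # remove the unusable pool from this node's view; 'zpool destroy' will
      # probably refuse because the pool has no available replicas to open
      zpool export z21d28adc-12d6-40e2-814b-0134711cabe4
      -----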

      Fritz

      Edited by: Fritz on 13.06.2012 23:13