
    Live Upgrade fails on cluster node with zfs root zones

    user200114
      We are having issues using Live Upgrade in the following environment:

      -UFS root
      -ZFS zone root
      -Zones are not under cluster control
      -System is fully up to date for patching

      We also use Live Upgrade on other nodes with the exact same system configuration, except that the zones there have UFS roots, and on those nodes Live Upgrade works fine.
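
      For reference, the relevant parts of that configuration can be confirmed like this (a sketch; the data/zones dataset name is taken from the lucreate output below):

      bash-3.2# df -n / /var             # both report ufs, on the SVM mirrors
      bash-3.2# zfs list -r data/zones   # the zone roots are ZFS datasets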

      Here is the output of a Live Upgrade:

      bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
      Determining types of file systems supported
      Validating file system requests
      The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
      The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
      Preparing logical storage devices
      Preparing physical storage devices
      Configuring physical storage devices
      Configuring logical storage devices
      Analyzing system configuration.
      Comparing source boot environment <sol10> file systems with the file
      system(s) you specified for the new boot environment. Determining which
      file systems should be in the new boot environment.
      Updating boot environment description database on all BEs.
      Updating system configuration files.
      The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
      Creating configuration for boot environment <sol10-20110505>.
      Source boot environment is <sol10>.
      Creating boot environment <sol10-20110505>.
      Creating file systems on boot environment <sol10-20110505>.
      Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
      Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
      Mounting file systems for boot environment <sol10-20110505>.
      Calculating required sizes of file systems for boot environment <sol10-20110505>.
      Populating file systems on boot environment <sol10-20110505>.
      Checking selection integrity.
      Integrity check OK.
      Preserving contents of mount point </>.
      Preserving contents of mount point </var>.
      Copying file systems that have not been preserved.
      Creating shared file system mount points.
      Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
      Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
      Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
      Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
      Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
      Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
      Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
      Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
      Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
      Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
      Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
      Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
      WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
      list of <2> potential problems (issues) that were encountered while
      populating boot environment <sol10-20110505>.
      INFORMATION: You must review the issues listed in
      </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
      resolved. In general, you can ignore warnings about files that were
      skipped because they did not exist or could not be opened. You cannot
      ignore errors such as directories or files that could not be created, or
      file systems running out of disk space. You must manually resolve any such
      problems before you activate boot environment <sol10-20110505>.
      Creating compare databases for boot environment <sol10-20110505>.
      Creating compare database for file system </var>.
      Creating compare database for file system </>.
      Updating compare databases on boot environment <sol10-20110505>.
      Making boot environment <sol10-20110505> bootable.
      ERROR: unable to mount zones:
      WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
      WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
      WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
      WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
      WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
      zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
      zoneadm: zone 'img1': call to zoneadmd failed
      ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
      ERROR: unmounting partially mounted boot environment file systems
      ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
      ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
      ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
      Making the ABE <sol10-20110505> bootable FAILED.
      ERROR: Unable to make boot environment <sol10-20110505> bootable.
      ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
      ERROR: Cannot make file systems for boot environment <sol10-20110505>.


      Any ideas why it can't mount that "backups" lofs file system into /.alt? I am going to try removing the lofs mount from the zone configuration (sketched below) and running lucreate again. But even if that works, I still need a way to keep using lofs file systems in the zones while using Live Upgrade.
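
      Roughly what I plan to try, for anyone following along (a sketch; the zone name img1 and the /backups mount point are read off the error output above):

      bash-3.2# ludelete sol10-20110505                  # clean up the half-built BE first
      bash-3.2# zonecfg -z img1 info fs                  # list the zone's fs resources; find the lofs entry
      bash-3.2# zonecfg -z img1 'remove fs dir=/backups' # drop the lofs mount (commits on exit)
      bash-3.2# lucreate -n sol10-20110505 ...           # then re-run the same lucreate as above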

      Thanks
        • 1. Re: Live Upgrade fails on cluster node with zfs root zones
          935364
           I was eventually able to get Live Upgrade working with zones on a ZFS root in Solaris 10 update 9, after hitting the same problem. When I first attempted "lumount s10u9c33zfs", it failed with the following error:

          ERROR: unable to mount zones:
           zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313-s10u9c33zfs/lu/a/u04" failed with exit code 111
          zoneadm: zone 'edd313': call to zoneadmd failed
          ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
          ERROR: unmounting partially mounted boot environment file systems
          ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
          ERROR: cannot mount boot environment by name <s10u9c33zfs>

           The solution in this case was to remove the fs resource whose lofs mount pointed at a "/global/" file system in the global zone:

           zonecfg -z edd313
           info                 # display the current configuration
           remove fs dir=/u05   # remove the lofs fs resource backed by the global zone
           verify               # check the change
           commit               # commit the change
           exit
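
           If the mount is still needed after the upgrade, it can be re-added once the new boot environment is activated. A one-line sketch (the backing path /global/ora_export/stage is only a guess based on the error above; substitute the real global-zone directory):

           zonecfg -z edd313 'add fs; set dir=/u05; set special=/global/ora_export/stage; set type=lofs; end'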
          • 2. Re: Live Upgrade fails on cluster node with zfs root zones
            user1539537

            The answer to this question, apparently, is that the zone roots must be UFS if the global root is UFS. At least that is what experimentation suggests: I have one machine working fine with UFS zone roots, and a new machine NOT working in exactly the configuration you described, because I had forgotten that rule in the interim. Luckily, zone roots are generally not all that large, so recreating them on UFS is not much of a burden.
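
            For anyone checking a machine against that rule before running lucreate, a rough sanity check (a sketch; the zonepath is the fourth colon-delimited field of zoneadm's parsable output):

            df -n /                                            # FSType of the global root, e.g. "/: ufs"
            df -n $(zoneadm list -cp | awk -F: '{print $4}')   # FSType of each configured zonepath

            With a UFS global root, each zonepath should come back as ufs as well.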