0 Replies Latest reply: Oct 8, 2012 3:48 PM by idawong

    unable to lucreate on vdisk

      I have a setup with:
      - Oracle VM Server for SPARC 2.2
      - Solaris 10 08/11 on both the guest and the primary LDom
      - guest LDoms built on ZFS; no ODS
      - the guest consists of one SAN disk controlled by a zpool, and all of its filesystems are ZFS

      I want to test Live Upgrade on a guest LDom, so I added a new vdisk, which is a SAN disk of identical size to the guest's root disk, presented in the same way: both are SMI-labeled disks, added to the primary's virtual disk service as full disks and presented as full disks to the guest.
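For reference, this is how the label on the new vdisk can be double-checked from inside the guest before running lucreate (the device name c0d1 here is an assumption, not taken from my actual box):

```shell
# Print the partition map of the new vdisk (c0d1 assumed; adjust to your device).
# An SMI (VTOC) label starts at sector 0 with slice 2 as the backup slice;
# an EFI label shows a first usable sector of 34 and a reserved slice 8.
prtvtoc /dev/rdsk/c0d1s2

# format in expert mode (-e) also reports the current label type and,
# via its "label" command, lets you switch between SMI and EFI.
echo | format -e -d c0d1
```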

      When I run:
      # lucreate -c root-pool -n patched_BE -p patch-rpool
      Analyzing system configuration.
      No name for current boot environment.
      Current boot environment is named <root-pool>.
      Creating initial configuration for primary boot environment <root-pool>.
      INFORMATION: No BEs are configured on this system.
      The device </dev/dsk/c0d0s2> is not a root device for any boot environment; cannot get BE ID.
      PBE configuration successful: PBE name <root-pool> PBE Boot Device </dev/dsk/c0d0s2>.
      ERROR: ZFS pool <patch-rpool> does not support boot environments
      ERROR: The disk used for ZFS pool is EFI labeled.
      WARNING: The boot environment definition file </etc/lutab> was removed because the PBE was not successfully defined and created.

      There is no boot environment yet, so I expected that it could not get a BE ID. But why does it think the disk is EFI labeled?
      After running lucreate, the disk does appear to have been labeled in EFI format.
      Does anyone have a solution for this, or has anyone seen it before?
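In case it helps anyone hitting the same error: when a zpool is created on a whole disk (e.g. `zpool create patch-rpool c0d1`), ZFS writes an EFI label to it, and ZFS root/boot pools require an SMI (VTOC) label, which is why lucreate rejects the pool. A sketch of the workaround, with assumed device names (c0d1):

```shell
# 1. Destroy the pool that was created on the whole (now EFI-labeled) disk.
zpool destroy patch-rpool

# 2. Relabel the disk with an SMI (VTOC) label. format's expert mode (-e)
#    offers the label-type choice: label -> "0. SMI" -> confirm.
format -e -d c0d1

# 3. In format's partition menu, put the disk's usable space into slice 0,
#    then create the pool on the SLICE, not the whole disk, so ZFS keeps
#    the SMI label instead of re-applying an EFI one.
zpool create patch-rpool c0d1s0

# 4. Re-run lucreate against the slice-backed pool.
lucreate -c root-pool -n patched_BE -p patch-rpool
```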