I'm facing this error when trying to create a new BE (for Solaris patch installation):
bash-3.00# lucreate -n upgradeBE
Analyzing system configuration.
Comparing source boot environment <root> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <upgradeBE>.
Source boot environment is <root>.
Creating boot environment <upgradeBE>.
Cloning file systems from boot environment <root> to create boot environment <upgradeBE>.
Creating snapshot for <rpool/ROOT/root> on <rpool/ROOT/root@upgradeBE>.
Creating clone for <rpool/ROOT/root@upgradeBE> on <rpool/ROOT/upgradeBE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/upgradeBE>.
Creating snapshot for <rpool/ROOT/root/var> on <rpool/ROOT/root/var@upgradeBE>.
Creating clone for <rpool/ROOT/root/var@upgradeBE> on <rpool/ROOT/upgradeBE/var>.
Setting canmount=noauto for </var> in zone <global> on <rpool/ROOT/upgradeBE/var>.
ERROR: cannot mount '/.alt.tmp.b-Ysf.mnt//var': directory is not empty
ERROR: cannot mount mount point </.alt.tmp.b-Ysf.mnt/var> device <rpool/ROOT/upgradeBE/var>
ERROR: failed to mount file system <rpool/ROOT/upgradeBE/var> on </.alt.tmp.b-Ysf.mnt/var>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to mount ABE <upgradeBE>
ERROR: Unable to clone the existing file systems from boot environment <root> to create boot environment <upgradeBE>.
ERROR: Cannot make file systems for boot environment <upgradeBE>.
It seems to be a known bug:
I understand these steps from the link above:
Unfortunately we can't simply mount -F lofs / /mnt, since Solaris "mounts" /var to /mnt/var as well (unlike Linux mount --bind ...). So the easiest way is to luactivate a working BE, boot into it and fix the bogus root filesystem of the BE you came from. E.g.:
zfs set mountpoint=/mnt rpool/ROOT/buggyBE
zfs mount rpool/ROOT/buggyBE
rm -rf /mnt/var/*
ls -al /mnt/var
zfs umount /mnt
zfs set mountpoint=/ rpool/ROOT/buggyBE
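If I try to map this onto my system, the lucreate output above suggests that rpool/ROOT/root is my current root dataset, so I assume that is the "buggyBE" meant here and that the commands would become something like this (I am not sure from which BE they are supposed to be run):
zfs set mountpoint=/mnt rpool/ROOT/root
zfs mount rpool/ROOT/root
ls -al /mnt/var     # check what is hiding underneath /var
rm -rf /mnt/var/*
zfs umount /mnt
zfs set mountpoint=/ rpool/ROOT/root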
But I don't understand these steps. Do I have to boot into buggyBE, and will that even work, given that it's a BE which wasn't fully created?
And what is the "test" BE? Is it my current working root BE? Why do I have to remove it? The link then continues:
Finally luactivate the buggyBE, boot into it and delete the incomplete BE and destroy all ZFS left over from the previously failed lucreate. E.g.:
ludelete test
# delete the remaining BE ZFS, e.g.
zfs list -t all | grep test
zfs destroy rpool/ROOT/test/var
zfs destroy rpool/ROOT/test
zfs destroy rpool/ROOT/`lucurr`/var@test
zfs destroy rpool/ROOT/`lucurr`@test
# and do not forget the cloned zone filesystems/snapshots
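In my case the failed BE is called upgradeBE and the source BE is root, so using the dataset names from the lucreate output above I suppose the equivalent cleanup would be:
zfs list -t all | grep upgradeBE
zfs destroy rpool/ROOT/upgradeBE/var
zfs destroy rpool/ROOT/upgradeBE
zfs destroy rpool/ROOT/root/var@upgradeBE
zfs destroy rpool/ROOT/root@upgradeBE
As far as I can tell I have no non-global zones (the output only mentions zone <global>), so I assume the note about cloned zone filesystems does not apply to me.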
Or is there a better way to fix it? For example, going into single-user mode, deleting /var/*, then booting back and trying to create the new BE again?
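To make that concrete, what I had in mind is roughly the following, though I am not sure it can work at all, since /var would presumably still be mounted over the directory I want to clean:
init S                  # drop to single-user mode
ls -al /var             # probably shows the mounted var dataset, not the leftovers underneath
rm -rf /var/*           # dangerous if the real /var is still mounted here?
init 6
lucreate -n upgradeBE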
Hmm, the first thing to do with any Live Upgrade problem is to make sure that you have the most recent version of it. Do you know which Solaris release your lu commands come from, or which version of them you are using?
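For example, something along these lines should show what is installed (the patch numbers are from memory, so treat them as an assumption: 121430 should be the Live Upgrade patch on SPARC and 121431 on x86; SUNWlucfg only exists on newer Solaris 10 updates):
cat /etc/release
pkginfo -l SUNWlucfg SUNWlur SUNWluu | grep -i version
showrev -p | egrep '121430|121431'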