I have never used Veritas Cluster monitoring or related software, so this is outside of my experience. But, did you attempt to ignore the "Warning" and continue to patch your alternate boot environment? It sounds like the monitoring software has just detected the creation of the dataset for the new zone BE (which should not yet be mounted).
Sadly, this is difficult, as the Cluster cannot monitor the state of the Zpool in this condition - you immediately lose your high availability!
I tried the following workaround:
Setting "mountpoint" to "legacy" for the ZFS dataset "../zone-newBE" ... this fixes the HA.
No idea yet what it does to the LU ...
Will test tomorrow.
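For reference, the workaround above comes down to one zfs command. This is only a sketch: the dataset name below is a placeholder, not the actual one from the post (which is elided), so substitute your own pool/dataset path.

```shell
# Placeholder dataset name for the new zone boot environment - substitute yours.
# mountpoint=legacy stops ZFS from auto-mounting the dataset, so the cluster
# agent no longer trips over an unexpected mounted filesystem.
zfs set mountpoint=legacy rpool/zones/myzone-newBE

# Verify the property change took effect:
zfs get mountpoint rpool/zones/myzone-newBE
```

With a legacy mountpoint, mounting becomes the administrator's (or the cluster agent's) responsibility via mount/vfstab rather than ZFS's.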
I second the motion....
Can we please, please, please add an option to exclude any local zones from "lucreate"?
There are many good reasons not to copy all zone paths while the zones are up and running during lucreate.
Real world scenario:
Production server with 10-20 zones
Zone paths are all separate Veritas volumes and Veritas File Systems
Down time must be kept to a minimum
Pre-work for luactivate begins days before
lucreate, luupgrade, patchadd, pkgrm/pkgadd take hours of work; zones continue to stay up during pre-work
lucreate copies zone paths into the ABE, which becomes the root file system after luactivate
This copy is taken from a running zone, and by cutover time it will be days old and unreliable
After luactivate, the zone path becomes a mount point for the Veritas volumes; the copied data now consumes space in / on the global zone and is soon hidden by the Veritas volume mounted on top of it
Once luactivate is run, if problems are encountered, we need to be able to roll back changes, thus zone path needs to remain unmodified
If all goes well, then and only then, do we want to bring all local zones up to the same OS/patch level as the global zone, using
zoneadm -z zonename attach -u
Current default behaviour of lucreate provides no way to exclude all zone paths; even when a zone path is placed in the exclude file, it still gets copied into the ABE
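To make the scenario above concrete, here is a rough sketch of the pre-work and cutover sequence being described. All names (BE name, zone name, device, image path) are placeholders I have introduced for illustration; this is not a definitive procedure, and the missing exclude option is exactly the point of the request.

```shell
# --- Days before the maintenance window: build and patch the alternate BE ---
# Note: lucreate will also copy every zone path into the ABE here; there is
# currently no option to exclude them, which is what this request is about.
lucreate -n newBE -m /:/dev/dsk/c0t1d0s0:ufs
luupgrade -u -n newBE -s /mnt/solaris-image

# --- During the maintenance window: activate the new BE and reboot ---
luactivate newBE
init 6

# --- Only if all goes well: bring each local zone up to the new OS/patch
# --- level of the global zone (repeat per zone)
zoneadm -z myzone detach
zoneadm -z myzone attach -u   # -u updates the zone to match the global zone
zoneadm -z myzone boot
```

If problems are encountered after luactivate, the rollback path depends on the zone paths having remained unmodified, which is why copying live zones into the ABE is so problematic.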