I patched a T5240 Solaris 10 server, whose root and zone filesystems are on ZFS, using Live Upgrade. Patching and luactivate completed successfully. After the reboot the server booted from the new BE, but some of the zones did not boot while others started. The error for the non-booting zones when trying to boot them manually is:
zoneadm -z dgpb003z boot
zoneadm: zone 'dgpb003z': zone root /zones/dgpb003z/root is reachable through /zones/dgpb003z-sol10_u7_be3/lu/b
zoneadm: zone 'dgpb003z': call to zoneadmd failed
dgpb003z-sol10_u7_be3 is the old BE.
How can the zone be made to point to the new BE?
Any help is appreciated.
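To narrow down which datasets are actually backing the zone path after the reboot, something like the following might help. The pool name `rpool` and the dataset layout are assumptions, not taken from the thread; substitute the actual names from `lustatus` and `zfs list`:

```shell
# Show zone states after booting into the new BE
zoneadm list -cv

# Show where the zone's datasets are mounted; the boot error suggests
# the zone root is still reachable through the old BE's /lu mount.
# "rpool" below is a placeholder -- use your actual pool name.
zfs list -o name,mountpoint,mounted -r rpool

# Confirm which dataset is mounted at the zone root
df -h /zones/dgpb003z/root
```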
Sorry, I don't use Live Upgrade, so I don't know how it affects zones. Did you properly shut down the zones before you patched the global zone? Are the zones newer than the last clone of the other disk?
I hope this helps.
As I used luupgrade, the patching was done on the inactive boot environment created through lucreate. After luactivate activated the newly patched boot environment, the server booted, but some zones did not come up; they were showing in the installed state.
The clone was made in the same root pool and zone pool as the original environment.
It looks like after luactivate and rebooting the server into the newly created environment, the zone paths still point to the old environment. On top of that, the server would not boot from the original environment either. It seems that using lucreate on ZFS is not reliable and has issues.
Has anyone come across this problem? How can I make sure that the zone paths point to the patched zone root?
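I can't say this is the supported fix, but one approach worth trying is to correct the mountpoints of the zone datasets that Live Upgrade cloned, so that the active BE's zone root is the dataset actually mounted at the zonepath. A hedged sketch only; every dataset name below is a placeholder, so check `lustatus` and `zfs list` first:

```shell
# 1. Confirm which BE is active and which zone datasets exist
lustatus
zfs list -r rpool/zones          # placeholder pool/dataset path

# 2. Unmount the stray old-BE dataset that shadows the zonepath
#    (the /lu path comes from the zoneadm error message)
zfs umount /zones/dgpb003z-sol10_u7_be3/lu/b

# 3. Point the new BE's cloned zone dataset at the expected zonepath
#    (dataset name is an assumption -- use the clone LU actually created)
zfs set mountpoint=/zones/dgpb003z rpool/zones/dgpb003z-new
zfs mount rpool/zones/dgpb003z-new

# 4. Try booting the zone again
zoneadm -z dgpb003z boot
```

If this works, verify afterwards with `zoneadm list -cv` and `df -h /zones/dgpb003z/root` that the zone root now resolves to the new BE's dataset before relying on the environment.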