You are correct that the mirrored root pool disks are not maintaining the boot environments.
What is your goal? If you want to be able to roll back to a previous BE and also mirror the ZFS root pool
for redundancy, then you can do both, as long as you create your BEs and keep both mirrored disks available
at the same time.
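As a minimal sketch of the two tasks side by side (the disk names c1t0d0 and c1t1d0 and the BE name are examples, not taken from your system):

```shell
# Mirror the root pool by attaching a second disk (disk names are examples).
zpool attach rpool c1t0d0 c1t1d0

# Watch the resilver; wait until it completes before relying on the new disk.
zpool status rpool

# Separately, create a boot environment you can roll back to.
beadm create solaris-backup

# List BEs to confirm. The BE data lives in the pool, so it is written
# to both sides of the mirror like any other pool data.
beadm list
```

The point is that mirroring and BE management are independent: the mirror protects against disk failure, the BE protects against bad changes, and they do not interfere with each other.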
If you want additional redundancy, then you can always add a 3rd disk for a 3-way mirror.
If you want to have a backup root pool with an existing BE, then your best option is to use
a separate 3rd disk as the backup root pool, but I would go with a 3-way mirror rather than a backup pool.
If your disks are reliable and your root pool is mirrored, then using beadm create to create a backup
BE should provide reasonable redundancy.
In Solaris 11.1, bootadm replaces installgrub. If you use zpool attach to create a mirrored
root pool, the boot blocks are applied to the new disk automatically.
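A short sketch of what that looks like in practice (disk names are examples):

```shell
# Attach a second disk to the root pool; on Solaris 11.1 the boot loader
# is written to the newly attached disk automatically.
zpool attach rpool c1t0d0 c1t1d0

# If you ever need to reinstall the boot loader by hand, bootadm replaces
# the older installgrub/installboot commands.
bootadm install-bootloader
```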
I'm not sure how you are removing the disks, but I think that is the root cause of the problem.
All ZFS data is written to both sides of the mirror, but if you remove and swap the disks, then
something is getting disconnected. I have a VB environment on my laptop, but I'm unfamiliar with
disconnecting and reconnecting disks that are in use.
If you keep both sides of the mirrored disks available to ZFS, then things will work as expected.
If a mirrored disk fails, then the root pool continues operation until you replace the disk because
all data is available from the remaining disk.
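For reference, replacing a failed side of the mirror is a short sequence (the disk name is an example):

```shell
# Check pool health; a failed mirror side shows as DEGRADED or UNAVAIL.
zpool status -x rpool

# Replace the failed disk in place (same slot) with a new one.
zpool replace rpool c1t1d0

# ZFS resilvers the replacement from the surviving side automatically.
zpool status rpool
```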
Once we get past the proof of concept stage, I'll submit this to Oracle Support... in a mission-critical environment we'll want to be 100% certain we understand what is supposed to happen. Right now I still don't understand conceptually how the boot environment isn't available on the mirrored rpool disk, and what's happening when both disks come back online and the boot environments are gone from both disks. I'm sure there's some explanation, haven't found it just yet... Thanks
You might be able to better understand this on bare metal rather than on VB.
If you detach a mirrored root pool disk, the pool info will be unavailable
on that disk until the disk is re-attached. When the disk is reattached, then
all info is resilvered on the re-attached disk.
I don't know what is happening with VB, but on bare metal, all data on
all mirrored disks and newly attached disks is resilvered automatically,
so all BE info is always available.
If you want to retest this on VB by using zpool detach and attach, then
that would also be closer to the bare metal experience.
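A sketch of that retest sequence, assuming example disk names c1t0d0 and c1t1d0:

```shell
# Detach one side of the mirror instead of yanking the virtual disk.
zpool detach rpool c1t1d0

# ...test whatever you need, then re-attach the disk to the mirror.
zpool attach rpool c1t0d0 c1t1d0

# Watch the resilver copy everything, including all BEs, back onto it.
zpool status rpool
```

Using detach/attach keeps ZFS informed of the disk's state, which is what happens administratively on bare metal, rather than simulating a surprise removal.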
If you are going to use VB in a mission-critical environment, you should
review the ZFS best practices. I believe VB disables disk cache flushing.