I'm using VirtualBox for testing, and I'm noticing that boot environments are being removed under these conditions:
1) Create a mirrored rpool.
2) Make the secondary disk also bootable with 'bootadm install-bootloader' (installgrub did not work, so I used this method instead).
3) Run a zpool scrub, just to make sure everything is perfectly normal with no errors.
4) Create two boot environments: beadm create snap1, beadm create snap2. Now beadm list shows both.
5) Run the "shutdown" command.
6) In VirtualBox, remove the secondary disk, then boot from the primary disk. I can still see the boot environments I just created, so far so good. Now power off.
7) In VirtualBox, add the secondary disk back, remove the primary, then boot from the secondary disk. Now there are no boot environments. At this point I'm realizing that the rpool mirror isn't maintaining boot environments, which I didn't expect.
8) In VirtualBox, add the primary disk back so both disks are attached before boot. Upon boot, there are still no boot environments. Also, if you now remove either disk, you can verify that boot environments do not appear on either one.
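For reference, the commands behind steps 1-4 look roughly like this (the device names c2t0d0s0 and c2t1d0s0 are placeholders for whatever VirtualBox presents):

```shell
# Attach a second disk to the root pool to form a mirror (device names hypothetical)
zpool attach rpool c2t0d0s0 c2t1d0s0

# Make the secondary disk bootable as well
bootadm install-bootloader

# Scrub to verify there are no errors, then check the result
zpool scrub rpool
zpool status rpool

# Create the two boot environments and confirm they are listed
beadm create snap1
beadm create snap2
beadm list
```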
Is this the expected behavior, or is there a way to preserve boot environments on both sides of the rpool mirror?
You are correct that the mirrored root pool disks are not maintaining the boot environments.
What is your goal? If you want to be able to roll back to a previous BE and also mirror a ZFS root pool
for redundancy, then you can do both, as long as you create your BEs while both mirrored disks are
available at the same time.
If you want additional redundancy, then you can always add a 3rd disk for a 3-way mirror.
If you want to have a backup root pool with an existing BE, then your best option is to have
a separate 3rd disk as a backup root pool. I would go with a 3-way mirror rather than a backup root pool, though.
If your disks are reliable and your root pool is mirrored, then using beadm create to create a backup
BE should provide reasonable redundancy.
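That backup-BE workflow is just a couple of commands (the BE name here is illustrative):

```shell
# Snapshot the current state as a backup boot environment
beadm create backup-be

# If a later change goes wrong, activate the backup and reboot into it
beadm activate backup-be
init 6
```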
In Solaris 11.1, bootadm replaces installgrub. If you are using zpool attach to create a mirrored
root pool, it applies the boot blocks automatically.
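For anyone coming from an older installgrub-based procedure, the Solaris 11.1 equivalent is a single command:

```shell
# Solaris 11.1: install the boot loader on the devices of the named root pool
# (this replaces the old installgrub/installboot step)
bootadm install-bootloader -P rpool
```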
Hi, thanks very much for the advice. I think the 2-disk mirror is enough redundancy. My only remaining question is out of curiosity: how is it that BEs are excluded from being mirrored to the second disk?
I'm not sure how you are removing the disks, but I think that is the root cause of the problem.
All ZFS data is written to both sides of the mirror, but if you remove and swap the disks, then
something is getting disconnected. I have a VirtualBox environment on my laptop, but I'm unfamiliar
with disconnecting and reconnecting disks that are in use.
If you keep both sides of the mirrored disks available to ZFS, then things will work as expected.
If a mirrored disk fails, then the root pool continues operation until you replace the disk because
all data is available from the remaining disk.
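The failure-and-replace case described above would look roughly like this (device name hypothetical):

```shell
# Check pool health; a failed mirror side shows as UNAVAIL or FAULTED while
# the pool stays DEGRADED and keeps serving all data from the healthy disk
zpool status -x rpool

# Replace the failed disk with a new one in the same slot
zpool replace rpool c2t1d0s0

# Reinstall the boot loader so the replacement disk is bootable too
bootadm install-bootloader
```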
Once we get past the proof-of-concept stage, I'll submit this to Oracle Support; in a mission-critical environment we'll want to be 100% certain we understand what is supposed to happen. Right now I still don't understand conceptually how the boot environment isn't available on the mirrored rpool disk, or what happens when both disks come back online and the boot environments are gone from both. I'm sure there's some explanation, I just haven't found it yet. Thanks!