Hi,
Yes, that is the main functionality of Live Upgrade, and with ZFS it's much faster because it creates a snapshot of the original BE. You can also use "zpool split" to split off a mirror into a new pool (or rpool).
And yes, you can have many boot environments which you can luactivate and boot....
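As a sketch of that workflow (the BE names and the install-image path are just examples):

```shell
# Create a new boot environment; on a ZFS root this is a
# near-instant snapshot/clone of the current BE.
lucreate -n newBE

# Upgrade the inactive BE from an install image
# (path is an example).
luupgrade -u -n newBE -s /mnt/sol10-image

# Activate the new BE, then reboot with init (not "reboot")
# so the activation completes properly.
luactivate newBE
init 6

# List all boot environments and their status.
lustatus
```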
I was hoping that would be the case.
I assume that by using 'zpool split', one is able to clone a system onto a new piece of hardware; will this split mirror be 'bootable', or does it need to have the boot blocks installed or anything else done to point to the new BE?
I had a quick play at installing from fresh with a ZFS root; however, the installer appeared to use the whole disk on a single slice. Did I miss something during the installation, or is it possible to tell the installer to install on a single slice that does not represent the whole disk (like you can with a standard UFS install)?
I agree that you can keep old BEs around, even those from older Solaris 10 updates, but I would issue a warning. If you upgrade your root zpool using "zpool upgrade" you may break some of the older boot environments you have lying around. You can't upgrade to anything newer than the version supported by the system in the oldest BE you want to be able to boot. The same is probably true for "zfs upgrade" as well.
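One way to guard against this is to check versions before upgrading, and to upgrade only to a specific version rather than the latest. A sketch (the pool name and version number are examples):

```shell
# Show the on-disk ZFS pool version currently in use.
zpool get version rpool

# List the versions this system's software supports and what
# each adds (read-only; changes nothing).
zpool upgrade -v

# Upgrade only to a version known to be supported by the
# oldest BE you still want to boot, not to the latest.
zpool upgrade -V 10 rpool

# The same caution applies to filesystem versions.
zfs get version rpool/ROOT
zfs upgrade -v
```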
In my experience, ZFS roots are much easier to maintain than UFS-based ones.
Good advice, thanks.
Does this mean that luupgrade would not automatically upgrade the ZFS version of the root pool? Would this be a task left to the administrator, or would the administrator need to be careful about which updates he applies, in case one of them updates the pool?
I have delayed moving to root ZFS as I wanted the technology to settle down a little first, but I feel that the time is probably right now. My experience with SVM and UFS roots has been a very fulfilling one. I have always found them to be simple yet robust to manage.
I will look forward to experimenting with ZFS roots and seeing if I can manage and recover these roots as easily as I can with UFS.
You will need to play around with the Preserve option in the Solaris 10 installer. If you have a 68 GB disk for your root pool and a 20 GB slice 0 for the ZFS BE, the installer will reset slice 0 to 68 GB. However, if you have a 20 GB slice 0 and you also have a 20 GB slice 4 and a 20 GB slice 5, and set the Preserve flag for those slices, the installer will honor the 20 GB slice 0, slice 4, and slice 5.
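To confirm what the installer actually did to the label, you can inspect the slice table afterwards (the device name is an example):

```shell
# Print the VTOC (slice layout) of the root disk to verify
# that slices 0, 4, and 5 kept their sizes after the install.
# s2 is the conventional whole-disk slice on SMI labels.
prtvtoc /dev/rdsk/c0t0d0s2
```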
Keep in mind that ZFS file systems are not tied to a specific disk slice and if you think about it,
whole disks are easier to manage than disk slices. The root pool needs a slice for booting, this
is long-standing boot limitation, but if you carve up the disk for other uses, subsequent management
and recovery is more difficult. Personally, if I never again have to use the format utility to
create disk slices, or describe the 27 steps of creating a disk slice to others, I will be very happy.
Consider using smallish disks (and one large disk slice 0) for your root pool and then use whole
disks for your data/user pools.
This is mostly a mind shift, using whole disks, but eventually you'll see the beauty of this design
and should enjoy the ease of administration much more than UFS and SVM.
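A minimal sketch of that layout (pool and device names are examples):

```shell
# The root pool lives on a slice because booting requires one;
# the installer creates it, e.g. on c0t0d0s0.
zpool status rpool

# Data pool on whole disks -- no format/slice step needed;
# ZFS labels and uses the entire devices.
zpool create datapool mirror c0t1d0 c0t2d0

# Grow the pool later by adding another mirrored pair.
zpool add datapool mirror c0t3d0 c0t4d0
```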
Thanks for your response. I take your point about using whole disks for pools. Traditionally, on my 'data' pools, I have used whole disks. I was considering using a sliced disk for a root pool purely because the particular server I am implementing this on has only two 300 GB disks, and I wanted to keep OS and data separate.
Do you think therefore, that it would be good practice to keep my zones and data (typically oracle database) on the same pool as the main host OS, when I do not have any additional storage attached or would it be better in this case to slice up my disk?
As far as I know, luupgrade does not automatically do the zpool and zfs updates. These need to be done manually. The reason I made my warning is that it sounded like you wanted to keep older Solaris 10 updates so you could go back in history presumably to test things in an older environment. It is painfully easy to see a new feature you'd love to have in the current release and update your zpool to get it - forgetting that this will break your older system releases. I've made this mistake myself and you can't easily recover from it.
I agree with the other responses that ZFS means thinking differently about storage. You do tend to work more with full disks than slices. You have pools and you grow them by adding disks. You don't use partitions to control the size of things, you apply such controls to filesystems, which are more like directories with storage controls associated with them. The tight coupling between the physical disk structure and the filesystem structure is gone which makes things much easier to manage.
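The size controls mentioned above look like this in practice (pool and filesystem names are examples):

```shell
# Create filesystems in an existing pool; they share the
# pool's free space rather than living in fixed partitions.
zfs create datapool/home
zfs create datapool/oracle

# Cap how much one filesystem may consume...
zfs set quota=50g datapool/home

# ...and guarantee a minimum amount of space for another.
zfs set reservation=100g datapool/oracle

# Review the controls you have applied.
zfs get quota,reservation datapool/oracle
```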