Hi - before I build some new Solaris servers I'd like thoughts on the following, please. I've previously built our Sun servers using SVM to mirror the disks. One reason is that when I patch the O/S I always split the mirrors beforehand, so in the event of a failure I can just boot from the untouched mirror - this method has saved my bacon on numerous occasions. However, we have just got some T4-1 servers that have hardware RAID, and although I like this, as it moves away from SVM / software RAID, I'm now thinking that I will no longer have this "backout plan" in the event of issues with the O/S updates, however unlikely.
Can anyone please tell me if I have any other options?
If you are sticking with SVM, I'd still consider using RAID, but I'd create alternate boot environment (ABE) partitions so that I could get my backout plan through the magic that is Live Upgrade.
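A minimal sketch of that Live Upgrade cycle, assuming Solaris 10 with the Live Upgrade packages installed; the BE names and the patch directory path are examples, not anything from your setup:

```shell
# Create an alternate boot environment (a copy of the running root)
lucreate -n patch-be

# Apply the patch set to the inactive BE (source dir is an example)
luupgrade -t -n patch-be -s /var/tmp/10_Recommended

# Activate the patched BE and reboot into it
luactivate patch-be
init 6

# Backout plan: activate the original BE again and reboot
#   luactivate <original-be>
#   init 6
```

The running environment is never patched directly, which is what gives you the clean fallback.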
However, if I were you, I'd use ZFS as the boot filesystem: completely ignore the hardware RAID and hand a pair of disks straight to ZFS. That way you can use the snapshot, clone, and rollback functionality of ZFS to keep any number of boot environments.
If you have not looked at how much ZFS helps you operationally, I'd urge you to have a look.
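As a sketch of the snapshot/rollback approach on a ZFS root, assuming a standard Solaris 10 layout where `rpool/ROOT/s10be` is the root dataset (an example name - check `zfs list` for yours). Note that rolling back the live root generally means booting from another BE or failsafe first:

```shell
# Snapshot the root dataset and its children before patching
zfs snapshot -r rpool/ROOT/s10be@pre-patch

# ...apply the patches, test the result...

# If it all goes wrong, roll back to the pre-patch state
# (boot failsafe or another BE first if this is the live root)
zfs rollback -r rpool/ROOT/s10be@pre-patch
```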
Thanks - just going through the 300 page ZFS admin guide now. I want to ditch SVM as it's clunky and not very friendly whenever we have a disk failure or need to patch the O/S, as mentioned. One thing I have just read in the ZFS admin guide is that:
"As described in "ZFS Pooled Storage" on page 51, ZFS eliminates the need for a separate volume manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of logical volumes, either software or hardware. This configuration is not recommended, as ZFS works best when it uses raw physical devices. Using logical volumes might sacrifice performance, reliability, or both, and should be avoided."
So it looks like I need to destroy my hardware RAID as well and just let ZFS manage it all. I'll try that, amend my JET template, kick off an install and see what it looks like.
It is subtly different for the disks in the rpool, as the boot loader needs to see slices. So what actually happens is that a single slice is allocated on the disk and used by ZFS.
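This is roughly how a mirrored rpool looks in practice, assuming the installer put the pool on slice 0 of the first disk; the device names are examples, and the controller-specific step of deleting the T4-1 hardware RAID volume beforehand (via your HBA's tooling) is deliberately left out as it varies:

```shell
# Attach the second disk's slice 0 as a mirror of the root pool
zpool attach rpool c0t0d0s0 c0t1d0s0

# On SPARC, install the boot block on the new mirror half
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0

# Watch the resilver complete
zpool status rpool
```

Once the resilver finishes you can boot from either disk, which restores the "boot from the other half" safety net you had with SVM.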
The JET ZFS section is pretty straightforward: once base_config_profile_zfs_disk is filled in, it'll just ignore the UFS settings lower down in the template. The comments in the template should make your options pretty clear.
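For illustration, the relevant template fragment might look something like this - only base_config_profile_zfs_disk is named above, the disk names are examples, and the comments in your own template are the real authority:

```shell
# Illustrative values only - check the comments in your base_config template.
# Listing two disks here lets JET build a mirrored ZFS root pool, and the
# UFS settings further down the template are then ignored.
base_config_profile_zfs_disk="c0t0d0s0 c0t1d0s0"
```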
But in short, if you want ZFS to be able to repair itself, it needs to manage the redundancy itself - give it the pair of disks directly and don't use H/W RAID.