I do have two suggestions:
- create a new partition (I always use number 6) that starts at cylinder 1, not 0. This may not be needed in every single case (it really only matters for disks you boot from), but it is good practice nonetheless
- do not use raid 6 for databases - see http://www.baarf.com/
RAID 6 does not perform as well as RAID 1 or RAID 10. Depending on your workload you may get away with it, but I'd simply refuse to put a database on anything with parity data. And while you did not specify the performance requirements for this system, you mention that you need less than 200 GB anyway, so you don't even need all the space a RAID 6 would provide.
First off, RAID-5 or RAID-6 is fine for database performance unless you have some REALLY strict and REALLY astronomical performance requirements. Requirements that someone with lots of money is willing to pay to meet.
You're running a single small x86 box with only onboard storage.
So no, you're not operating in that type of environment.
Here's what I'd do, based upon a whole lot of experience with Solaris 10 and not so much with Solaris 11, and also assuming this box is going to be around for a good long time as an Oracle DB server:
1. Don't use SVM for your boot drives. Use the onboard RAID controller to make TWO 2-disk RAID-1 mirrors. Use these for TWO ZFS root pools. Why two? Because if you use live upgrade to patch the OS, you want to create a new boot environment in a separate ZFS pool. If you use live upgrade to create new boot environments in the same ZFS pool, you wind up with a ZFS clone/snapshot hell. If you use two separate root pools, each new boot environment is a pool-to-pool actual copy that gets patched, so there are no ZFS snapshot/clone dependencies between the boot environments. Those snapshot/clone dependencies can cause a lot of problems with full disk drives if you wind up with a string of boot environments, and at best they can be a complete pain in the buttocks to clean up - assuming live upgrade doesn't mess up the clones/snapshots so badly you CAN'T clean them up (yeah, it has been known to do just that...). You do your first install with a ZFS rpool, then create rpool2 on the other mirror. Each time you do an lucreate to create a new boot environment from the current boot environment, create the new boot environment in the rpool that ISN'T the one the current boot environment is located in. That makes for ZERO ZFS dependencies between boot environments (at least in Solaris 10. Although with separate rpools, I don't see how that could change....), and there's no software written that can screw up a dependency that doesn't exist.
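If it helps, the alternating-pool scheme above boils down to something like this (Solaris 10 live upgrade; the boot environment names are placeholders I made up):

```shell
# Current BE lives in rpool, so create the next one in rpool2.
# -p selects the target root pool; this forces a pool-to-pool
# copy instead of a same-pool snapshot/clone.
lucreate -n patch-2013Q1 -p rpool2

# Next patch cycle, the active BE is in rpool2, so go back:
lucreate -n patch-2013Q2 -p rpool
```

`lustatus` will show you which boot environments exist and which one is active, if you lose track of which pool you're in.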
2. Create a third RAID-1 mirror, either with the onboard RAID controller or ZFS. Use those two drives for home directories. You do NOT want home directories located on an rpool within a live upgrade boot environment. If you put home directories inside a live upgrade boot environment, 1) that can be a LOT of data that gets copied, and 2) if you have to revert to an old boot environment because the latest OS patches broke something, you'll also revert every user's home directory.
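A minimal sketch of that third mirror as its own pool (device names are examples; substitute the LUN your controller exports, or the two raw disks if you let ZFS do the mirroring):

```shell
# Two-disk ZFS mirror dedicated to home directories,
# completely outside any live upgrade boot environment.
zpool create homepool mirror c0t4d0 c0t5d0
zfs create -o mountpoint=/export/home homepool/home
```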
3. That leaves you 10 drives for a RAID-6 array for DB data. 8 data and two parity. Perfect. I'd use the onboard RAID controller if it supports RAID-6, otherwise I'd use ZFS and not bother with SVM.
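If the controller turns out not to support RAID-6, the ZFS equivalent of that 8+2 layout is a raidz2 vdev, something like (device names are examples):

```shell
# 10-disk raidz2: usable capacity of 8 disks, any 2 can fail.
zpool create dbpool raidz2 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 \
    c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0
```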
This also assumes you'd be pretty prompt in replacing any failed disks, as there are no global spares. If there would be significant time before you'd even know you had a failed disk (days or weeks), let alone getting it replaced, I'd rethink that. In that case, if there were space, I'd probably put home directories in the 10-disk RAID-6 array, using a ZFS quota to limit how big that file system could get. Then use the two drives freed up as spares.
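Assuming the ZFS route, that cap-and-spare arrangement might look like this (the dataset name and quota value are examples):

```shell
# Keep home directories inside the big pool, but capped
# so they can't crowd out the database.
zfs create -o quota=100G -o mountpoint=/export/home dbpool/home

# Turn the two freed-up disks into hot spares for the pool.
zpool add dbpool spare c0t4d0 c0t5d0
```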
But if you're prompt in recognizing failed drives and getting them replaced, you probably don't need to do that. Although you might want to just for peace of mind if you do have the space in the RAID-6 pool.
And yes, using four total disks for two OS root ZFS pools seems like overkill. But you'll be happy when four years from now you've had no problems doing OS upgrades when necessary, with minimal downtime needed for patching, and with the ability to revert to a previous OS patch level with a simple "luactivate BENAME; init 6" command.
If you have two or more of these machines set up like that in a cluster with Oracle data on shared storage you could then do OS patching and upgrades with zero database downtime. Use lucreate to make new boot envs on each cluster member, update each new boot env, then do rolling "luactivate BENAME; init 6" reboots on each server, moving on to the next server after the previous one is back and fully operational after its reboot to a new boot environment.
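Per node, that rolling update is just the live upgrade sequence followed by a verification step before moving on (the BE name is a placeholder):

```shell
# On one cluster node at a time:
lucreate -n patched-be -p rpool2   # target whichever root pool is idle
# ...apply patches/updates to patched-be with luupgrade...
luactivate patched-be
init 6
# After the reboot: confirm the node rejoined the cluster and the
# database instance is healthy before starting on the next node.
```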
Thanks, "5287726", for the very thorough reply. A few questions/observations:
1. Don't use SVM for your boot drives.

Based on the rest of #1, I think your main point is: do not use ONLY hardware RAID for the boot drives, and that slapping the extra layer of redundancy (via the mirrored ZFS pool) will make dealing with upgrades a whole lot easier.

2. ...You do NOT want home directories located on an rpool within a live upgrade boot environment.

Yes, excellent point, and one that I actually learned after my initial post: for numerous reasons (including the live upgrade scenario that you mention), it is never good to run the Oracle software on the same filesystem as the OS.

3. ...otherwise I'd use ZFS and not bother with SVM

My RAID controller does indeed support RAID-6, so I'll go that route. But re: using ZFS only: I've been burned by a 'software-RAID-only' setup (the database files on mirrored ZFS pools with no RAID parity): last year, a power outage (combined with a useless UPS) irrevocably corrupted the ZFS data pool. Fortunately it was not in production, but I had major egg on my face: my x86 server was the only one that had problems; all the Windows and Solaris hardware-RAID servers came back online just fine. This left me wary of any 'ZFS-only' configuration... anecdotally, among my limited peer group, I have come across nobody willing to put all their eggs in the ZFS basket.
RE: prompt recognition of drive failures

I think I'd be more comfortable with some global spares in the hopper: at the 'Standard' support level, Oracle usually takes a few days to get replacement disks delivered.
Thanks again for the insight, much appreciated!
Don't mirror the root pools with ZFS - use the 4 disks in two separate hardware RAID-1 LUNs, put a separate ZFS pool on each one. (One will be created automatically on install - IIRC that one will be called "rpool").
Make another ZFS pool (rpool2, maybe) on the other two-disk RAID-1 array. When you do a live upgrade, if the current boot environment (OS install) exists in rpool, create the new boot environment with lucreate on rpool2. If the current boot environment exists on rpool2, create the new boot environment on rpool. That way the live upgrade process will just copy data between pools; it won't create a mess of file system snapshots and clones.
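Creating that second root pool is a one-liner once the second RAID-1 LUN exists (the device name is an example; note that a ZFS root pool has to live on a disk slice, not a whole disk):

```shell
# s0 slice of the second hardware RAID-1 LUN
zpool create rpool2 c0t2d0s0

# New BEs then alternate between pools via lucreate's -p option:
lucreate -n newBE -p rpool2
```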
You do wind up using four hard drives to support a single OS installation, but it's easily maintainable while still allowing you to use live upgrade to minimize downtime for patching and updates. If you put all those live upgrade boot environments that can accumulate over a long server lifetime into one ZFS rpool, things can get REALLY nasty to maintain.
IMO it's better to use hardware RAID when possible so disk replacement is easier - pull out the dead one, put in the new one, and watch the hardware RAID controller fix it.
That's an important clarification: creating two separate rpools for the OS is what gives you the live upgrade flexibility (mirroring a single root pool would not provide the same benefits)...
Thanks again for all the good info. These Oracle forums are very hit and miss, in my experience... somewhat analogous to a public toilet: to a large degree, the experience is defined by who was in there before you.