I am building a Solaris 10 x86 public/anonymous FTP server and have a twelve-bay SATA array enclosure and twelve 1TB disks to connect to an X2250. The server will be replicated via rsync to a backup server, so I was thinking raidz1 is sufficient zpool redundancy on the servers. Using the "power of two plus parity" best practice for raidz pools, my choices are:
one raidz1 vdev with 10 disks and two hot spares
two raidz1 vdevs, one with 9 disks, the other with 3 disks
Using the ten-disk vdev with two hot spares does not seem an efficient use of the drives (two of them sit idle as spares). Using the nine-disk and three-disk vdevs uses up all the disks, but leaves the system without a hot spare for automatic failover in the case of a drive fault.
I could ignore the "power of two plus parity" best practice and set up the pool with:
two raidz1 vdevs, one with five disks, the other with six disks, and one hot spare
Comments on the pros and cons of the raidz1 vdev configurations listed above, given the twelve-bay array, or suggestions on better zpool configurations not listed, are appreciated.
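To make the trade-off concrete, the usable data-disk count of each candidate layout can be tallied with a quick back-of-the-envelope sketch (raw disk counts only; it ignores ZFS metadata and allocation overhead):

```python
# Back-of-the-envelope usable space for each candidate 12-bay layout.
# Each vdev is (total_disks, parity_disks); hot spares hold no data.

def usable_disks(vdevs):
    """Data disks remaining after parity is subtracted from each vdev."""
    return sum(total - parity for total, parity in vdevs)

layouts = {
    "one 10-disk raidz1, 2 hot spares":       ([(10, 1)], 2),
    "9-disk raidz1 + 3-disk raidz1":          ([(9, 1), (3, 1)], 0),
    "5-disk raidz1 + 6-disk raidz1, 1 spare": ([(5, 1), (6, 1)], 1),
}

for name, (vdevs, spares) in layouts.items():
    print(f"{name}: {usable_disks(vdevs)} data disks, {spares} spare(s)")
```

With 1TB drives the three layouts come out to roughly 9, 10, and 9 disks' worth of data respectively.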
Personally, I'm not a fan of large RAIDZ1 pools and never maximize for space, but that's easy for me to say.
I don't pay for disk space.
A few things to think about:
1. Do you care about performance? RAIDZ performs well for large I/O like streaming video. Mirrored pools perform better for small reads and writes. A RAIDZ pool with one 10-disk vdev would not perform well.
2. RAIDZ1 means that a vdev can tolerate the failure of exactly 1 disk. With 2 RAIDZ1 vdevs, the pool can tolerate the failure of 1 disk in each vdev.
3. Spares are recommended, backups are required. If you can't back it up, don't build it. I've seen enough bad things happen with no backups to say backups are a requirement.
You can review other ZFS pool recommendations here:
I have to second what cindys said. There is no substitute for a backup. Period.
That said, here are some pointers. The number of iops (I/O operations per second) of a vdev is roughly equal to the number of iops of a single disk. There are some exceptions for mirrored vdevs, in that on reads they behave more like two devices but on writes behave as one. Otherwise, figure out how many iops you need, then divide by the number a single drive will provide to get the number of vdevs needed in the pool.
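That arithmetic can be written out as follows (the numbers are hypothetical; roughly 100 iops per 7200 rpm SATA disk is a common ballpark, not a measured figure):

```python
import math

def vdevs_needed(required_iops, iops_per_disk):
    """A raidz vdev delivers roughly one disk's worth of iops, so the
    pool needs at least ceil(required / per-disk) vdevs."""
    return math.ceil(required_iops / iops_per_disk)

# Hypothetical workload: 250 iops against ~100 iops per SATA disk.
print(vdevs_needed(250, 100))  # -> 3
```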
Once you know the number of vdevs, you next have to balance redundancy with capacity. More parity disks will improve redundancy but hurt capacity, since parity must be allocated per vdev. For example, if the number of iops you need requires three vdevs, then each parity disk per vdev will consume three disks across the pool; two parity disks per vdev will consume six disks total, and so on.
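Continuing the three-vdev example above, the pool-wide cost of parity is just the vdev count times the parity level (a trivial sketch, but it makes the scaling explicit):

```python
def parity_overhead(vdev_count, parity_per_vdev):
    """Disks consumed by parity across the whole pool: parity is paid
    once per vdev, so it scales with the number of vdevs."""
    return vdev_count * parity_per_vdev

# With three vdevs: raidz1 costs 3 disks of parity, raidz2 costs 6.
print(parity_overhead(3, 1))  # -> 3
print(parity_overhead(3, 2))  # -> 6
```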
Here is my thinking about hot spares. I have lost at least one pool because ZFS became confused (an array lost power for a few minutes) and tried to replace the lost disks but did not have enough hot spares. When the array came back, ZFS had convinced itself that it lacked sufficient replicas to do anything at all. Granted, that was several years and several ZFS versions ago, but it really turned me against hot spares, or at least turned me off auto-replace.
Instead, think of parity disks as pre-built hot spares. If you think you need one parity disk and one hot spare, just configure two parity disks on each vdev. After one failure the vdev will continue to run and still have a parity disk in reserve.
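A side-by-side sketch of that argument, comparing a 6-disk raidz2 vdev with a 5-disk raidz1 vdev plus a hot spare (both occupy six bays; the numbers count raw disks only):

```python
# Both layouts use six bays and store four disks' worth of data, but
# raidz2 rides out two simultaneous failures, while raidz1 + spare
# survives a second failure only after the spare finishes resilvering.

def layout(disks, parity, spares=0):
    return {
        "bays": disks + spares,
        "data_disks": disks - parity,
        "simultaneous_failures_survived": parity,
    }

print(layout(6, 2))            # 6-disk raidz2
print(layout(5, 1, spares=1))  # 5-disk raidz1 + 1 hot spare
```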
Thanks for the replies. I ended up using two 6-disk raidz2 vdevs to create the ftp data pool. After reviewing both replies, I decided going raidz2 was better than using hot spares. I created an hourly script to check the zpool status, so we should know pretty quickly if/when a disk in the pool fails. Going with two raidz2 vdevs rather than two mismatched raidz1 vdevs plus one hot spare trades one disk of usable space (8 data disks vs. 9) for a second parity disk in each vdev.
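A minimal version of such an hourly check might look like this (a hypothetical sketch; it relies on "zpool status -x" printing "all pools are healthy" when nothing is wrong, and it leaves the actual alerting mechanism up to you):

```python
import subprocess

def pools_are_healthy(status_output):
    """'zpool status -x' prints exactly this one line when no pool
    has problems; any other output means something needs attention."""
    return status_output.strip() == "all pools are healthy"

def check_pools():
    """Run the real command and return (healthy?, raw output)."""
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True).stdout
    return pools_are_healthy(out), out

# Offline demonstration against canned output:
print(pools_are_healthy("all pools are healthy\n"))        # -> True
print(pools_are_healthy("  pool: ftp\n state: DEGRADED"))  # -> False
```

Run from cron every hour and mail the raw output to the admin whenever the first element of check_pools() comes back False.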