I've posted here once already. I've been on my first dive into Solaris for the last week now. I had a question about ZFS, as I'm still wrapping my head around some things that are not entirely new to me, but different from what I'm used to.
If I create a RAIDZ2 pool called tank, for example,
then I create tank/share.
I understand that tank/share is treated as a separate filesystem, which is interesting, but if I add an SSD drive to tank for cache, does tank/share use it as well?
Did I phrase that right?
tank/share will use the SSD as cache as well.
All available hardware resources in a pool (disks; cache devices for reads, "Readzilla"; log devices for writes, "Logzilla") are shared among all datasets (filesystems) in that pool.
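As a sketch of what that looks like, assuming hypothetical device names (c0t0d0 through c0t5d0 for the pool disks and c1t0d0 for the SSD):

```shell
# Create a RAIDZ2 pool named tank from six disks (device names are hypothetical)
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Create a child filesystem; it shares the pool's resources
zfs create tank/share

# Add an SSD as a cache (L2ARC) device; every dataset in tank,
# including tank/share, benefits from it
zpool add tank cache c1t0d0
```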
Thanks for considering a redundant ZFS storage pool (you'll be glad you did).
A few more things to think about:
1. Although a RAIDZ pool maximizes space, this pool configuration is best for large I/Os like
streaming video. Mirrored pools are better for small reads and writes. If you are looking for
best performance, you should test this config with your workload.
2. Consider that a separate log device is best for improving NFS synchronous write performance.
3. Consider that a separate cache device is best for improving read performance.
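For items 2 and 3, adding those devices to an existing pool is a one-liner each; device names below are hypothetical:

```shell
# Separate intent log (ZIL): helps synchronous write performance, e.g. over NFS
zpool add tank log c1t0d0

# Separate cache device (L2ARC): helps read performance
zpool add tank cache c1t1d0

# Verify the resulting pool layout
zpool status tank
```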
I have been slowly going over a lot of the documentation.
I will be adding 1-2 SSDs for caching, and I may use one as a log device; I'm not sure yet. I only have 6 hot-swap bays in this rack unit, so running separate mirrored pools for the shares with smaller files to improve performance is not going to happen. I'm limited by my network bandwidth anyhow, so I'm not too worried about it. I went with what I thought was the most optimal configuration for what I have, and I'm happy with that. The main thing was redundant storage, and dual parity was a must after I had a RAID5 fail on me last year.
One thing I've noticed: I have network drives connected to the shares on the Solaris box from my Windows machine, and for some reason Windows is reporting the drives as 6.57 TB when Solaris reports 7.1 TB.
It was reporting as 7.1 TB on Windows before. The only thing that's changed is that I've created a couple of sub-filesystems on the pool. Is this normal? Or is Windows just reporting incorrectly?
If you're okay with RAIDZ2 performance then that's fine. This is a large pool (7.1 TB?), so
be sure that you have good backups and you are monitoring for disk failures and other problems.
Regarding the space discrepancy between Windows and Solaris, are you looking at the
7.1 TB with zpool list or zfs list?
My best guess is that there are going to be space discrepancies between Windows and ZFS.
I don't know because I never use Windows, but legacy commands like du and df are
unaware of descendent ZFS file systems and so on.
Let's get back to the RAIDZ space issue though. Some amount of pool space will be consumed
by RAIDZ parity so what is available at the pool level will be different (and smaller) than
what is available to the pool's file systems.
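You can see the difference yourself by comparing the two views. Using the figures from this thread as an example:

```shell
# Reports raw pool capacity, including space that will go to parity
zpool list tank    # e.g. SIZE shows 10.8T for this pool

# Reports space actually usable by the filesystems, after parity
zfs list tank      # e.g. USED plus AVAIL comes to about 7.1T
```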
I have a good explanation here:
ZFS Storage Pool Space Reporting
See the RAIDZ storage pool bullet. The raw pool size is 408 GB but the space
that is available to the file systems is 133 GB.
You should also review the ZFS best practices section, here:
Well, I understand that some space will be used for parity. If you look at the details I posted above, there's 10.8T when you look at zpool list, but only 7.1T when you look at du; this makes sense because of the parity and formatting.
But now it seems to have shrunk... significantly. I did set quotas on a couple of the child filesystems, but I was under the impression those wouldn't affect it. Either way, I'm sure it's working the way it's supposed to; I'm not too concerned. It just seemed odd that it appears to have shrunk.
Actually, I'm looking at it now, and I see why.
It's because the child filesystems are shared, and they're reporting the free space on the parent pool as the disk size. It's odd, but it is correct when I look at the numbers.
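A quick way to see this (dataset names below are just for illustration) is to list the space columns: the AVAIL value for a parent and its children is the same shared pool free space, unless a quota caps a child lower:

```shell
# Show space accounting for the pool and all descendants
zfs list -o name,used,avail,quota -r tank

# Capping a child with a quota lowers that child's AVAIL,
# which is what a Windows client sees as the "drive size"
zfs set quota=500G tank/share
```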
Edited by: Stlouis1 on Dec 13, 2012 2:59 PM