This discussion is archived
8 Replies Latest reply: Dec 13, 2012 12:00 PM by Stlouis1

ZFS question

Stlouis1 Newbie
Hi guys,

I've posted here once already, and I've been on my first dive into Solaris for the last week now. I have a question about ZFS, as I'm still wrapping my head around some things that aren't entirely new to me, but are different from what I'm used to.

If I create a RAIDZ2 pool called tank, for example,

and then I create tank/share,

I understand that tank/share is treated as a separate file system, which is interesting. But if I add an SSD drive to tank as a cache device, does tank/share use it as well?

Did I phrase that right?
  • 1. Re: ZFS question
    Nik Expert
    Hi.

    Yes.
    tank/share will use the SSD as cache as well.

    All available hardware resources in a pool (disks, cache devices (Readzilla), and log devices (Writezilla)) are shared by all datasets (file systems) on that pool.
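
    For example (the device names here are placeholders, not from your system), the cache device is added at the pool level, so every dataset in the pool benefits from it:

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs create tank/share
    zpool add tank cache c2t0d0    # SSD read cache (L2ARC) for the whole pool
    zpool status tank              # the cache device is listed under the pool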

    Regards.
  • 2. Re: ZFS question
    Stlouis1 Newbie
    Thanks for the quick reply.
  • 3. Re: ZFS question
    cindys Pro
    Thanks for considering a redundant ZFS storage pool (you'll be glad you did).

    A few more things to think about:

    1. Although a RAIDZ pool maximizes space, this pool configuration is best for large I/Os like
    streaming video. Mirrored pools are better for small reads and writes. If you are looking for
    best performance, you should test this config with your workload.

    2. Consider that a separate log device is best for improving NFS synchronous write performance.

    3. Consider that a separate cache device is best for improving read performance. (Example commands for all three points are sketched below.)
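
    For illustration (the device names below are placeholders, not your actual devices), the configurations in points 1-3 might look roughly like this:

    # Option A - RAIDZ2 pool: maximizes space, best suited to large streaming I/O
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Option B - mirrored pairs (alternative layout): better for small random reads and writes
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

    # Separate log device (helps synchronous writes, e.g. over NFS)
    zpool add tank log c2t0d0

    # Separate cache device (helps read performance)
    zpool add tank cache c2t1d0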

    http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc

    Thanks, Cindy
  • 4. Re: ZFS question
    Stlouis1 Newbie
    I have been slowly going over a lot of the documentation.

    I will be adding 1-2 SSDs for caching, and I may use one as a log device; I'm not sure yet. I only have 6 hot-swap bays in this rack unit, so running separate mirrored pools for the shares with smaller files to improve performance is not going to happen. I'm limited by my network bandwidth anyhow, so I'm not too worried about it. I went with what I thought was the most optimal configuration for what I have and I'm happy with that. The main thing was redundant storage, and dual parity was a must after I had a RAID5 fail on me last year.

    One thing I've noticed: I have network drives connected to the shares on the Solaris box from my Windows machine, and for some reason Windows is reporting the drives as 6.57 TB, while Solaris reports 7.1 TB.

    It was reporting 7.1 TB on Windows before, and the only thing that's changed is that I've created a couple of sub file systems on the pool. Is this normal, or is Windows just reporting incorrectly?
  • 5. Re: ZFS question
    cindys Pro
    If you're okay with RAIDZ2 performance then that's fine. This is a large pool (7.1 TB?), so
    be sure that you have good backups and that you are monitoring for disk failures and other
    problems.

    Regarding the space discrepancy between Windows and Solaris, are you looking at the
    7.1 TB with zpool list or zfs list?

    Thanks, Cindy
  • 6. Re: ZFS question
    Stlouis1 Newbie
    I was looking at df -h, but here are the other two. Still new to me.


    solaris@SRV-DATA:/tank/users$ zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    rpool 232G 10.9G 221G 4% 1.00x ONLINE -
    tank 10.9T 3.62T 7.26T 33% 1.00x ONLINE -

    solaris@SRV-DATA:/tank/users$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 11.1G 217G 4.90M /rpool
    rpool/ROOT 4.54G 217G 31K legacy
    rpool/ROOT/solaris 4.54G 217G 4.03G /
    rpool/ROOT/solaris-backup-1 108K 217G 3.68G /
    rpool/ROOT/solaris-backup-1/var 42K 217G 202M /var
    rpool/ROOT/solaris/var 296M 217G 203M /var
    rpool/VARSHARE 264K 217G 264K /var/share
    rpool/dump 4.13G 217G 4.00G -
    rpool/export 377M 217G 32K /export
    rpool/export/home 377M 217G 32K /export/home
    rpool/export/home/solaris 377M 217G 377M /export/home/solaris
    rpool/swap 2.06G 217G 2.00G -
    tank 2.41T 4.72T 1.43T /tank
    tank/movies 67.9K 4.72T 67.9K /tank/movies
    tank/music 67.9K 750G 67.9K /tank/music
    tank/shared 232G 268G 232G /tank/shared
    tank/stuff 333G 167G 333G /tank/stuff
    tank/users 438G 4.72T 114G /tank/users
    tank/users/* 6.52G 18.5G 6.52G /tank/users/*
    tank/users/* 1.18G 23.8G 1.18G /tank/users/*


    solaris@SRV-DATA:/tank/users$ df -h
    Filesystem Size Used Available Capacity Mounted on
    rpool/ROOT/solaris 228G 4.0G 217G 2% /
    /devices 0K 0K 0K 0% /devices
    /dev 0K 0K 0K 0% /dev
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.5G 1.5M 1.5G 1% /system/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 221G 4.0G 217G 2% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    rpool/ROOT/solaris/var 228G 203M 217G 1% /var
    swap 1.6G 128M 1.5G 8% /tmp
    rpool/VARSHARE 228G 267K 217G 1% /var/share
    rpool/export 228G 32K 217G 1% /export
    rpool/export/home 228G 32K 217G 1% /export/home
    rpool/export/home/solaris 228G 377M 217G 1% /export/home/solaris
    rpool 228G 4.9M 217G 1% /rpool
    tank 7.1T 1.4T 4.7T 24% /tank
    tank/music 750G 68K 750G 1% /tank/music
    tank/shared 500G 232G 268G 47% /tank/shared
    /export/home/solaris 218G 377M 217G 1% /home/solaris
    tank/stuff 500G 333G 167G 67% /tank/stuff
    tank/users 7.1T 129G 4.7T 3% /tank/users
    tank/users/* 25G 6.5G 18G 27% /tank/users/*
    tank/users/* 25G 1.2G 24G 5% /tank/users/*
    tank/movies 7.1T 67K 4.7T 1% /tank/movies
  • 7. Re: ZFS question
    cindys Pro
    My best guess is that there are going to be space discrepancies between Windows and ZFS.
    I don't know for sure because I never use Windows, but legacy commands like du and df
    are unaware of descendent ZFS file systems and so on.

    Let's get back to the RAIDZ space issue though. Some amount of pool space will be consumed
    by RAIDZ parity, so what is available to the pool's file systems will be different from (and
    smaller than) the raw space reported at the pool level.

    I have a good explanation here:

    http://docs.oracle.com/cd/E26502_01/html/E29007/gbbti.html#scrolltoc

    ZFS Storage Pool Space Reporting

    See the RAIDZ storage pool bullet. The raw pool size is 408 GB but the space
    that is available to the file systems is 133 GB.
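
    As a rough sketch (assuming your 6 bays hold a single 6-disk RAIDZ2 vdev, which your earlier posts suggest), roughly 4 of every 6 disks' worth of raw space ends up usable by file systems:

    # zpool list reports raw space, parity included:  tank SIZE = 10.9T
    # rough usable estimate:  10.9T x 4/6 ~= 7.27T
    # zfs list reports space after parity:  USED 2.41T + AVAIL 4.72T ~= 7.1T
    # the small remaining gap is metadata and internal reservation overhead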

    You should also review the ZFS best practices section, here:

    http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc

    Thanks, Cindy
  • 8. Re: ZFS question
    Stlouis1 Newbie
    Well, I understand that some space will be used for the parity. If you look at the details I posted above, there's 10.9T when you look at zpool list, but only 7.1T when you look at df; this makes sense because of the parity and formatting.

    But now it seems to have shrunk significantly. I did set quotas on a couple of the file systems, but I was under the impression those wouldn't affect it. Either way, I'm sure it's working the way it's supposed to and I'm not too concerned; it just seemed odd that it appears to have shrunk.


    edit//
    Actually, I'm looking at it now and I see why.

    It's because the child file systems are shared, and they're reporting the free space of the parent pool as the disk size. It looks odd, but the numbers are correct when I check them.
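
    For anyone else hitting this, the effect can be reproduced along these lines (500G is an example quota that matches the tank/shared line in the df output above, assuming that size comes from a quota):

    zfs set quota=500g tank/shared     # cap tank/shared at 500G
    zfs get quota tank/shared
    df -h /tank/shared                 # Size now shows the 500G quota, not the pool-wide 7.1T
    df -h /tank/movies                 # no quota here, so Size still shows the pool-wide 7.1T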

    Edited by: Stlouis1 on Dec 13, 2012 2:59 PM
