8 Replies Latest reply: Oct 18, 2012 12:31 PM by Cindys-Oracle

    Little problem with zpool and file system SOLARIS 10

    969174
      Hello everybody,
      I'm not a Solaris expert, but I have to fix a problem.
      Someone created a zpool, and now if I run the command 'zpool list' I get:

      NAME          SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
      production_1  19.0T  4.39T  14.6T  23%  ONLINE  -

      but if I submit 'df' I get

      Filesystem    size  used  avail  capacity  Mounted on
      [...]
      production_1  12T   2.9T  9.6T   24%       /oamid_repository
      rpool         457G  33K   441G   1%        /rpool

      So it seems I have only 12 TB of space, but I expected 19 TB.
      How can I fix this? My goal is to have the full 19 TB available on the production_1 file system.

      I've already searched for a solution online, but had no luck.

      Thank you

      Francesco
        • 1. Re: Little problem with zpool and file system SOLARIS 10
          Nik
          Hi.
          df shows only mounted file systems, but you may have unmounted datasets or snapshots on pool production_1.
          It is also possible that a quota has been set on a dataset.

          To get a list of all ZFS datasets:

          zfs list | grep production_1

          To get all the settings for a dataset:

          zfs get all production_1


          I think that you have snapshots.
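
          If so, something like the following should show them (these options are available on recent Solaris 10 updates; check your release):

          zfs list -t snapshot -r production_1
          zfs list -o space production_1

          The first command lists snapshots explicitly; the second breaks the used space down into usedbysnapshots, usedbydataset, and so on.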

          Regards.
          • 2. Re: Little problem with zpool and file system SOLARIS 10
            969174
            Well, thanks for the reply, but I still can't figure out the problem...
            I'll paste the output of the commands; I hope you can help me anyway. Thanks in advance...


            _______________________________________________________________
            bash-3.00# zfs list | grep production_1
            production_1 2.93T 9.54T 2.93T /oamid_repository
            bash-3.00#


            _________________________________________________________________


            bash-3.00# zfs get all | grep production_1

            production_1  type                  filesystem             -
            production_1  creation              Wed May 23 17:44 2012  -
            production_1  used                  2.93T                  -
            production_1  available             9.54T                  -
            production_1  referenced            2.93T                  -
            production_1  compressratio         1.00x                  -
            production_1  mounted               yes                    -
            production_1  quota                 none                   default
            production_1  reservation           none                   default
            production_1  recordsize            128K                   default
            production_1  mountpoint            /oamid_repository      default
            production_1  sharenfs              off                    default
            production_1  checksum              on                     default
            production_1  compression           off                    default
            production_1  atime                 on                     default
            production_1  devices               on                     default
            production_1  exec                  on                     default
            production_1  setuid                on                     default
            production_1  readonly              off                    default
            production_1  zoned                 off                    default
            production_1  snapdir               hidden                 default
            production_1  aclmode               groupmask              default
            production_1  aclinherit            restricted             default
            production_1  canmount              on                     default
            production_1  shareiscsi            off                    default
            production_1  xattr                 on                     default
            production_1  copies                1                      default
            production_1  version               4                      -
            production_1  utf8only              off                    -
            production_1  normalization         none                   -
            production_1  casesensitivity       sensitive              -
            production_1  vscan                 off                    default
            production_1  nbmand                off                    default
            production_1  sharesmb              off                    default
            production_1  refquota              none                   default
            production_1  refreservation        none                   default
            production_1  primarycache          all                    default
            production_1  secondarycache        all                    default
            production_1  usedbysnapshots       0                      -
            production_1  usedbydataset         2.93T                  -
            production_1  usedbychildren        21.1M                  -
            production_1  usedbyrefreservation  0                      -
            production_1  logbias               latency                default
            bash-3.00#

            _________________________________________________________________________________________
            • 3. Re: Little problem with zpool and file system SOLARIS 10
              Nik
              Hi.
              Strange result...

              Please show the result of:

              zpool show all production_1

              Regards.
              • 4. Re: Little problem with zpool and file system SOLARIS 10
                969174
                Thanks for replying.
                I got an error :-(




                bash-3.00# zpool show all production_1/
                unrecognized command 'show'
                usage: zpool command args ...
                where 'command' is one of the following:

                create [-fn] [-o property=value] ...
                [-O file-system-property=value] ...
                [-m mountpoint] [-R root] <pool> <vdev> ...
                destroy [-f] <pool>

                add [-fn] <pool> <vdev> ...
                remove <pool> <device> ...

                list [-H] [-o property[,...]] [pool] ...
                iostat [-v] [pool] ... [interval [count]]
                status [-vx] [pool] ...

                online <pool> <device> ...
                offline [-t] <pool> <device> ...
                clear [-nF] <pool> [device]

                attach [-f] <pool> <device> <new-device>
                detach <pool> <device>
                replace [-f] <pool> <device> [new-device]
                split [-n] [-R altroot] [-o mntopts]
                [-o property=value] <pool> <newpool> [<device> ...]

                scrub [-s] <pool> ...

                import [-d dir] [-D]
                import [-d dir | -c cachefile] [-n] -F <pool | id>
                import [-o mntopts] [-o property=value] ...
                [-d dir | -c cachefile] [-D] [-f] [-R root] -a
                import [-o mntopts] [-o property=value] ...
                [-d dir | -c cachefile] [-D] [-f] [-R root] <pool | id> [newpool]
                export [-f] <pool> ...
                upgrade
                upgrade -v
                upgrade [-V version] <-a | pool ...>

                history [-il] [<pool>] ...
                get <"all" | property[,...]> <pool> ...
                set <property=value> <pool>
                • 5. Re: Little problem with zpool and file system SOLARIS 10
                  Nik
                  Remove the extra "/" after production_1.

                  Also, my mistake... the correct command is:

                  zpool get all production_1
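
                  It may also be worth checking how the pool is laid out (mirror vs. raidz matters for space accounting):

                  zpool status production_1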
                  • 6. Re: Little problem with zpool and file system SOLARIS 10
                    969174
                    OK... here's the output:



                    bash-3.00# zpool get all oamid_repository
                    NAME              PROPERTY       VALUE                SOURCE
                    oamid_repository  size           19.0T                -
                    oamid_repository  capacity       23%                  -
                    oamid_repository  altroot        -                    default
                    oamid_repository  health         ONLINE               -
                    oamid_repository  guid           9210280157966325533  default
                    oamid_repository  version        22                   default
                    oamid_repository  bootfs         -                    default
                    oamid_repository  delegation     on                   default
                    oamid_repository  autoreplace    off                  default
                    oamid_repository  cachefile      -                    default
                    oamid_repository  failmode       wait                 default
                    oamid_repository  listsnapshots  on                   default
                    oamid_repository  autoexpand     off                  default
                    oamid_repository  free           14.6T                -
                    oamid_repository  allocated      4.43T                -
                    bash-3.00#


                    Any ideas?
                    • 7. Re: Little problem with zpool and file system SOLARIS 10
                      Cindys-Oracle
                      The amount of pool space that is available to your file systems depends on the type of pool.

                      I would recommend using zpool list and zfs list instead of df or du, because df and du are legacy
                      commands that do not fully account for descendant file systems.

                      See this section of the S11 transition guide, which describes ZFS space accounting depending on
                      the pool type. It doesn't matter that this is Solaris 11 info; it applies to Solaris 10 releases
                      as well.

                      http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-6.html#filesystem-8
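
                      As a rough illustration (the six-disk raidz2 layout here is hypothetical; zpool status production_1 will show the real one): for raidz pools, zpool list includes parity space, so a 19.0T pool built from six-disk raidz2 vdevs would give the file systems roughly 4/6 of that, about 12.7T, which is close to the ~12.5T (2.93T used + 9.54T available) that zfs list reports.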

                      Let us know if you have more questions.

                      Thanks, Cindy
                      • 8. Re: Little problem with zpool and file system SOLARIS 10
                        Cindys-Oracle
                        The forum editor was kind enough to change "zfs list" to "ifs list" in my previous reply. Sorry about that; read it as zfs list. Cindy