

root fills up to 100% in seconds

Vunga Member Posts: 25 Blue Ribbon
edited July 2018 in Oracle SuperCluster

Hi guys,

I need your assistance.

My root file system fills up to 100% within a few seconds after I either delete files or grow the file system.

This is in a zone on a T5-8 SuperCluster.

Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris-9    80G    65G         0K   100%    /
rpool/ROOT/solaris-9/var
                        80G   290M         0K   100%    /var
/dev                     0K     0K         0K     0%    /dev
/oracle                 87G    67G        20G    78%    /oracle
proc                     0K     0K         0K     0%    /proc
ctfs                     0K     0K         0K     0%    /system/contract
mnttab                   0K     0K         0K     0%    /etc/mnttab
objfs                    0K     0K         0K     0%    /system/object
swap                   161G   680K       161G     1%    /system/volatile
sharefs                  0K     0K         0K     0%    /etc/dfs/sharetab
fd                       0K     0K         0K     0%    /dev/fd
swap                   161G   133M       161G     1%    /tmp
rpool/VARSHARE          80G   1.2M         0K   100%    /var/share
rpool/export            80G    32K         0K   100%    /export
rpool/export/home       80G    35K         0K   100%    /export/home
rpool/export/home/grid
                        80G   344K         0K   100%    /export/home/grid
rpool/export/home/oracle
                        80G    11G         0K   100%    /export/home/oracle
rpool/export/home/orarom
                        80G   331K         0K   100%    /export/home/orarom
rpool                   80G    31K         0K   100%    /rpool
rpool/VARSHARE/pkg      80G    32K         0K   100%    /var/share/pkg
rpool/VARSHARE/pkg/repositories
                        80G    31K         0K   100%    /var/share/pkg/repositories

I have ExaWatcher running; it is installed in /opt, which also keeps increasing.

How can I tell which process is doing this, and how best can I stop it and clear space on my node?




  • Nik
    Nik Member Posts: 2,775 Bronze Crown
    edited July 2018


    You have many filesystems in the pool rpool:

    rpool/ROOT/solaris-9              80G    65G    0K   100%   /
    rpool/ROOT/solaris-9/var          80G   290M    0K   100%   /var
    /oracle                           87G    67G   20G    78%   /oracle
    rpool/VARSHARE                    80G   1.2M    0K   100%   /var/share
    rpool/export                      80G    32K    0K   100%   /export
    rpool/export/home                 80G    35K    0K   100%   /export/home
    rpool/export/home/grid            80G   344K    0K   100%   /export/home/grid
    rpool/export/home/oracle          80G    11G    0K   100%   /export/home/oracle
    rpool/export/home/orarom          80G   331K    0K   100%   /export/home/orarom
    rpool                             80G    31K    0K   100%   /rpool
    rpool/VARSHARE/pkg                80G    32K    0K   100%   /var/share/pkg
    rpool/VARSHARE/pkg/repositories   80G    31K    0K   100%   /var/share/pkg/repositories

    Activity on any of these filesystems can cause this problem.

    So run df -k and save the output.

    Remove some files on one of these filesystems and check df -k again.

    That way you can find which filesystem is really growing.

    A filesystem cannot actually release space if snapshots still reference the deleted data.
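    Snapshot-held space can be checked with the standard zfs list and destroy commands; this is a sketch, and the snapshot name shown is only a hypothetical example:

    ```shell
    # List every snapshot under rpool with the space each one holds
    # exclusively (USED) and the data it references (REFER)
    zfs list -t snapshot -o name,used,referenced -r rpool

    # If a large, no-longer-needed snapshot shows up, destroying it is
    # what actually returns the space. Example name -- verify before
    # deleting anything:
    # zfs destroy rpool/ROOT/solaris-9@example-snap
    ```

    On a boot environment such as rpool/ROOT/solaris-9, snapshots are often created automatically by pkg update / beadm, so check beadm list before destroying them.
    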

    You can use du to calculate the space used by each directory.
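    For example, a per-directory summary sorted largest-first (a sketch using standard du/sort options; /var is just an illustrative starting point):

    ```shell
    # Summarize usage in KB for each top-level directory under /var,
    # then sort numerically, largest first.
    # On Solaris, add du -d to stay on one filesystem (GNU du: -x).
    du -sk /var/* 2>/dev/null | sort -rn | head
    ```

    Repeating this from / and descending into the biggest directory usually locates the growth in a few steps.
    
    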

    You can use DTrace scripts to monitor filesystem activity (iosnoop, opensnoop).



  • Jens
    Jens Member Posts: 91 Blue Ribbon
    edited July 2018

    You might try to hunt down the process using dtrace as suggested by Nik, if permitted.

    I'd try with

    dtrace -qn 'fsinfo:::write {printf("  %-16s %6d %8.8s  %-10s %-16s %s\n", execname, pid, probename, args[0]->fi_fs, args[0]->fi_mount, args[0]->fi_pathname);}'

    as a starter.

    It should give the name of the executable, its pid, the type of call/access (here limited to write), the fs-type (e.g. ufs, zfs, sockfs, ...) and mountpoint, as well as the path of the file, e.g.

    syslogd             687    write     zfs        /var             /var/...

    You might want to filter out sockfs, and perhaps limit the collection to a second:

    dtrace -c 'sleep 1' -qn ...  |grep -v sockfs
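    When single writes are too noisy, a summing aggregation over the same fsinfo provider shows totals per process instead. This is a sketch, assuming the fsinfo write probe reports the byte count in arg1 (as on Solaris 11); run it as root in the affected zone:

    ```shell
    # Aggregate bytes written per process and mountpoint for 10 seconds,
    # skipping socket traffic; DTrace prints the aggregation on exit,
    # sorted with the largest writer last.
    dtrace -qn '
      fsinfo:::write
      /args[0]->fi_fs != "sockfs"/
      {
        @bytes[execname, args[0]->fi_mount] = sum(arg1);
      }
      tick-10s { exit(0); }'
    ```

    The biggest entry for / or /var in that output is the likely culprit to stop or reconfigure.
    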
