6 Replies Latest reply: Jul 12, 2007 1:41 PM by 538022

    MAX_DUMP_FILE_SIZE is not being used

    538022
      Hello guys,
      a little question about the parameter MAX_DUMP_FILE_SIZE and its use.

      First of all, my database is running the following version:
      Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
      With the Partitioning and Data Mining options

      SQL> show parameter max_dump
      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- ------------------------------
      max_dump_file_size                   string      20000

      Parameter documentation of oracle:
      http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams116.htm#sthref490

      But yesterday Oracle generated a trace file of roughly 10 megabytes.
      It was a trace of the MMNL process, and at the end of the trace the following line was appended:
      *** DUMP FILE SIZE IS LIMITED TO 10240000 BYTES ***

      But I did not set max_dump_file_size to 10 megabytes... so how and when does Oracle use this parameter, and where does the 10 megabyte limit come from?

      Thanks and Regards
      Stefan
        • 1. Re: MAX_DUMP_FILE_SIZE is not being used
          Nicolas.Gasparotto
          10240000 / 20000 = 512
          This is your OS block size.
          Nothing wrong here: MAX_DUMP_FILE_SIZE is the maximum number of OS blocks.
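
          As a quick sanity check of the arithmetic above (an editorial sketch; the 512-byte OS block is the figure derived in this thread, not a documented constant for every platform):

```python
# With no K/M suffix, MAX_DUMP_FILE_SIZE counts OS blocks.
# 512 bytes is the block size derived from the numbers above.
OS_BLOCK_BYTES = 512
max_dump_file_size = 20000      # parameter value, no suffix

limit_bytes = max_dump_file_size * OS_BLOCK_BYTES
print(limit_bytes)  # 10240000, matching the trace file message
```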

          Nicolas.
          • 2. Re: MAX_DUMP_FILE_SIZE is not being used
            473071
            Hi Stefan,
            If you set max_dump_file_size=10000 without any suffix (K or M),
            it represents 10000 x the OS block size of your server, e.g. 512 bytes on Solaris,
            1024 bytes on HP-UX, 512 bytes on Windows NT.
            If you really want to specify 10 MB,
            then use the command below:
            ALTER SYSTEM SET MAX_DUMP_FILE_SIZE='10M' SCOPE=BOTH;
            Cheers!
            • 3. Re: MAX_DUMP_FILE_SIZE is not being used
              538022
              Not exactly:

              The database is running on AIX, and the JFS2 filesystem block size is 4096 bytes.

              4096 bytes = 4 KB
              4 KB * 20000 = 80000 KB (≈78 MB)

              and not 10 MB... so something does not add up.

              We use the default block size of AIX JFS2 filesystems:
              http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds3/mkfs.htm
              The following options are specific to the Enhanced Journaled File System (JFS2):
              The default block size is 4096 bytes
              Thanks and Regards
              Stefan
              • 4. Re: MAX_DUMP_FILE_SIZE is not being used
                511057
                Hi Stefan:
                I am seeing the same behaviour as you: my max_dump_file_size is 10240, and I got the same kind of message, *** DUMP FILE SIZE IS LIMITED TO 5242880 BYTES ***. I am also on AIX 5.3L, and my /oracleapps filesystem was created with JFS2, so its block size is 4096 (you can check it in smit storage). Did you find out what went wrong on your system? There has been a huge .trc dump in the udump directory since I started my RMAN backup process. Did you solve it yet?
                • 5. Re: MAX_DUMP_FILE_SIZE is not being used
                  247514
                  Consult the Oracle documentation if you are not sure;
                  report an error if you believe the documentation is wrong, or reconsider whether you are wrong.

                  You can set the size limit in kilobytes or megabytes instead of a number of OS blocks by appending K or M.

                  http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams116.htm#sthref490

                  MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert file). Change this limit if you are concerned that trace files may use too much space.

                  A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.

                  A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.

                  The special value string UNLIMITED means that there is no upper limit on trace file size. Thus, dump files can be as large as the operating system permits.
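
                  The three value forms quoted above (plain number of OS blocks, K/M suffix, UNLIMITED) can be sketched as a small helper. This is a hypothetical Python illustration, not an Oracle API; the 512-byte OS block default is an assumption based on the numbers in this thread:

```python
def dump_limit_bytes(value, os_block_bytes=512):
    """Translate a MAX_DUMP_FILE_SIZE setting into a byte limit.

    Returns None for UNLIMITED (trace files limited only by the OS).
    """
    v = str(value).strip().upper()
    if v == "UNLIMITED":
        return None                       # no upper limit
    if v.endswith("K"):
        return int(v[:-1]) * 1024         # kilobytes
    if v.endswith("M"):
        return int(v[:-1]) * 1024 * 1024  # megabytes
    return int(v) * os_block_bytes        # plain number = OS blocks

print(dump_limit_bytes("20000"))     # 10240000 (the limit from the trace)
print(dump_limit_bytes("10M"))       # 10485760
print(dump_limit_bytes("UNLIMITED")) # None
```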
                  • 6. Re: MAX_DUMP_FILE_SIZE is not being used
                    538022
                    Hi CIATECPCV,
                    if you don't specify a unit like K or M, the value is interpreted in OS blocks.

                    I have experimented with the value and arrived at the following explanation: the numbers only work out if the 4096 of the JFS2 filesystem block size is read as bits rather than bytes.

                    In my example this yields exactly the observed dump size limit:
                    max_dump_file_size => 20000
                    The dump file limit was 10240000 bytes, so:
                    20000 x 4096 bits = 81920000 bits => 10240000 bytes :-)

                    and in your case:
                    10240 x 4096 bits = 41943040 bits => 5242880 bytes

                    Note that 4096 bits = 512 bytes, so this matches what Nicolas wrote above: the effective unit is a 512-byte OS block, independent of the block size the AIX filesystem was actually created with. Somewhat weird, but there is the explanation :)
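
                    The bits-versus-bytes reading above can be checked quickly; 4096 bits is exactly 512 bytes, which is why both interpretations produce the same numbers (an editorial sketch of the arithmetic, not of Oracle internals):

```python
FS_BLOCK_BITS = 4096                  # JFS2 block size, read as bits
bytes_per_block = FS_BLOCK_BITS // 8  # = 512 bytes, the OS block size

print(20000 * bytes_per_block)  # 10240000 -> matches the first trace limit
print(10240 * bytes_per_block)  # 5242880  -> matches the second trace limit
```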

                    I hope this helps...

                    Regards
                    Stefan