1 Reply Latest reply on Nov 27, 2012 9:21 AM by ChrisJenkins-Oracle

    Setting the linux shared memory size

      I am a little bit confused about the kernel parameters; could you please share your opinion on this one?
      The shared memory needed by TimesTen (tt) on Linux is computed as below:
      shared mem = perm size + temp size + log size + 7 MB overhead

      However, when I look at sysctl.conf on Linux, I see a large default value for the shared memory setting, around 68 GB, so I assume I should not override/lower it. What is your opinion? Does keeping the default (68 GB) make my DB perform badly, or does setting it to the value computed from the equation make it faster?
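      To make the formula concrete, here is a small sketch that computes the required shmmax from the equation above. The perm/temp/log sizes are purely illustrative values, not taken from any real DSN configuration:

      ```shell
      # Required shared memory for a hypothetical TimesTen database.
      # PERM_MB/TEMP_MB/LOG_MB are example values only.
      PERM_MB=4096
      TEMP_MB=1024
      LOG_MB=256
      OVERHEAD_MB=7   # fixed overhead from the formula above

      REQUIRED_MB=$((PERM_MB + TEMP_MB + LOG_MB + OVERHEAD_MB))
      REQUIRED_BYTES=$((REQUIRED_MB * 1024 * 1024))

      echo "kernel.shmmax must be at least ${REQUIRED_BYTES} bytes (${REQUIRED_MB} MB)"
      ```

      With these example sizes the segment works out to 5383 MB, so shmmax would need to be at least that large for the datastore to attach.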

      # Controls the maximum shared segment size, in bytes
      kernel.shmmax = 68719476736
      kernel.sem = 250 32000 100 100

      # Controls the maximum total amount of shared memory, in pages
      kernel.shmall = 4294967296

      Thanks a lot.
        • 1. Re: Setting the linux shared memory size
          The shmmax kernel parameter simply sets a limit on the maximum size of an individual shared memory segment. It has no direct effect on performance. In general it needs to be set large enough to allow for the largest shared memory segment that you need to create, but smaller than the amount of physical memory in the machine. The shmall parameter sets a system-wide limit on the total amount of shared memory (all active segments added together) that can be allocated. This should also, in general, be less than the physical memory on the machine.

          If either is set to more than the memory in the machine, that is not an immediate problem. However, in that case it becomes possible to create shared memory that exceeds the physical memory of the system (as long as adequate swap space is configured), and if this occurs then overall system performance will be impacted, probably severely.

          So, work out what you need and set them accordingly.
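          As a sketch of "work out what you need": shmmax is in bytes and shmall is in pages, so the two limits are computed differently. The numbers below are illustrative only (an 8 GB maximum segment and a 12 GB total, with an assumed 4096-byte page size; verify yours with `getconf PAGE_SIZE`):

          ```shell
          # Illustrative sizing, not a recommendation for any particular machine.
          PAGE_SIZE=4096                                    # assumed; check with: getconf PAGE_SIZE

          SHMMAX=$((8 * 1024 * 1024 * 1024))                # largest single segment: 8 GB, in bytes
          SHMALL=$((12 * 1024 * 1024 * 1024 / PAGE_SIZE))   # total shared memory: 12 GB, in pages

          echo "kernel.shmmax = ${SHMMAX}"
          echo "kernel.shmall = ${SHMALL}"
          # Put these lines in /etc/sysctl.conf, then apply with: sysctl -p
          ```

          Note the unit mismatch: if you copied the byte count into shmall you would be setting a limit 4096 times larger than intended, which is how oversized defaults like the one quoted above tend to arise.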