This, coincidentally, is why I posted this question in the first place. It seems to me that if the RAM is going to get allocated anyway, whether I'm using it or not, it's not all that useful to be able to split these two parameters. Why not just default sga_target to sga_max_size and then let me specify minimums for the SGA substructures as needed? It's not like setting a lower SGA_TARGET is going to save OS RAM, right?
The only reason I could think of is that maybe I want SGA_TARGET smaller so my buffer cache, etc., is smaller, so that I can save some clock cycles on buffer scans or some such. In this day and age of fast CPU-RAM interconnects and lots of compute power, I'm not sure we still need to care about that, either...
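For what it's worth, one way to see how the instance is actually carving up SGA_TARGET among the auto-tuned components (a sketch only, assuming 10g and the standard v$ views) would be something like:
-- Sketch: how the current SGA_TARGET is distributed among the auto-tuned components
select component, current_size/1024/1024 as mb, user_specified_size/1024/1024 as min_mb
  from v$sga_dynamic_components
 order by current_size desc;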
As I stated originally, my understanding was that the memory represented by the difference
(SGA_MAX_SIZE - (SGA_TARGET + SGA_FIXED_SIZE + manually set caches))
was not to be allocated. And if TARGET was reduced, those sizes were to be returned to the OS to be made available for other SHMEM usage, perhaps by other instances.
Under this scheme introduced in 10g, the FIXED memory area, which contains the base linked lists to the various pools, was to be allocated to allow growth to the max size.
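For reference, that difference should be visible from inside the instance; a sketch assuming 10gR2 and the v$sgainfo view:
-- Sketch (10gR2 assumed): what is granted to components vs. still free up to SGA_MAX_SIZE
select name, bytes/1024/1024 as mb, resizeable
  from v$sgainfo;
-- the 'Free SGA Memory Available' row is the portion not yet handed to any component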
I seem to stand corrected (although OS independence has not been established).
Related Metalink notes that I'm still digesting include Note:295626.1, which is specifically relevant to this discussion.
Basically, that leaves two SGA_TARGET settings - 0, and 'nearly' SGA_MAX_SIZE (to accommodate overheads).
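In practice that would look something like this (hypothetical values, assuming sga_max_size is 900M):
-- Sketch, hypothetical values: the only two settings that seem worth using
alter system set sga_target = 0 scope=both;     -- ASMM off; size the individual pools manually
alter system set sga_target = 880M scope=both;  -- 'nearly' sga_max_size, leaving room for overhead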
It depends on your OS.
If you have an OS that takes SGA_MAX_SIZE from real, physical memory and leaves it there, then there isn't a lot of point in having ST < SMS.
But not all OSes are Windows or Linux: on Solaris, for example, the unused difference between SMS and ST can be virtualised out of existence until it's needed. If something else needs it in the meantime, then it can use that memory as if it were unallocated. Meanwhile, you've gained yourself the ability to grow your SGA beyond its normal limits when needed, too.
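One rough way to see the difference between what is reserved and what is actually resident (a sketch, assuming Solaris with the usual ipcs/pgrep/pmap tools; <pmon_pid> below is a placeholder and exact output varies by release):
# Sketch: reserved segment size vs. pages actually backed by physical memory
ipcs -m                   # the segment size should still reflect SGA_MAX_SIZE
pgrep -f ora_pmon         # pick any background process attached to the SGA
pmap -x <pmon_pid>        # the RSS column shows what is actually resident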
Thanks Howard. Apparently I am not entirely out to lunch. I knew I'd read, and discussed, the topic the way I presented it ... just did not remember the assumptions.
As you stated, one assumption is that we are using an OS that actually knows how to disregard/virtualize SHM segments that are not in use. Another feather in the Solaris cap. <g>
Note to self - check whether SHM is allocated in chunks to the processes, something like Oracle's granules. If so, under Linux, is there an IPC settings combination that allows a process to release a chunk back to the OS, or is it an all-or-nothing situation based on the shmem handle?
Further note to self - I've had my head way too far down the Linux and Windows path lately. It is time to refresh on the big guys.
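A sketch of the kind of check I have in mind on Linux (standard procps tools, nothing Oracle-specific, and not yet verified by me):
# Sketch: is the whole SysV segment backed by physical pages, or only what has been touched?
ipcs -m                          # bytes column = full segment size (roughly SGA_MAX_SIZE plus overhead)
pmap -x $(pgrep -f ora_pmon)     # RSS per mapping; compare against the segment size above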
> That means our mighty Tom was wrong in this respect.
Perhaps that means that Hans took mighty Tom's comments out of context, perhaps not using all of the information provided or reading between the lines.
> Let's do an extreme test: set an 8G SGA_MAX_SIZE on a server that only has 2G of memory. (don't try this at home)
Perhaps it means that the incredible yingkuan makes certain assumptions about the operation of shared memory and SGA across all operating systems - assumptions that are not universal. (I know Hans certainly made this mistake!)
Which operating systems did you test? Please state your assumptions. <g>
I feel I need to make a few notes before proceeding.
1. Even a mighty person makes mistakes; making a mistake doesn't make him unworthy.
2. Pointing out a mighty person's mistake doesn't automatically make you mighty or "incredible".
3. I was just joking in that post.
4. At least I did my test and am sharing my experience based on my observations, however incomplete or wrong it may be.
> Perhaps it means that the incredible yingkuan makes certain assumptions about the operation of shared memory and SGA across all operating systems - assumptions that are not universal. (I know Hans certainly made this mistake!)
Not sure where the satire is coming from; I was simply making the same assumption as you did. And by calling me 'incredible', doesn't that make you 'incredible' as well? That makes two of us.
I was testing on Solaris; my testing on Red Hat Linux shows a different result.
> Not sure where the satire is coming from, I was simply making the same assumption as you did. By calling me ...
You have provided a lot of useful information in your posts, and I find I have learned a lot from them. Your posts are always on my 'to read' list.
The satire came from responding at 3 AM ... after being awakened by a cat jumping on me. Unfortunate, perhaps.
Wait a second, I made a mistake in my Linux testing.
I didn't set sga_max_size explicitly in my spfile, so it automatically equaled sga_target.
After I set it to, say, 900M and left sga_target at 600M, the result supports my previous observation:
Oracle will allocate an amount of memory equal to SGA_MAX_SIZE from the OS.
-- Assuming you set it explicitly in spfile
SYS@azdev > show sga
Total System Global Area 943718400 bytes
Fixed Size 2077264 bytes
Variable Size 490737072 bytes
Database Buffers 444596224 bytes
Redo Buffers 6307840 bytes
SYS@azdev > show parameter sga
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 900M
sga_target                           big integer 600M
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0xc5b3cb58 6946840 oracle 640 945815552 22
Linux 2.6.9-42.0.8.ELsmp #1 SMP Tue Jan 23 12:49:51 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
I admit making premature assumptions is one of my bad habits; it's particularly bad for a DBA. Thanks for pointing that out.
Did you find any OS that will not preallocate SGA_MAX_SIZE?
I only got to test Solaris and Linux; our HP box is production, so it's no good for testing.
Apparently I don't have any Windows installation, either.
> Hi Hans,
> I admit making premature assumptions is one of my bad habits; it's particularly bad for a DBA. Thanks for pointing that out.
> Did you find any OS that will not preallocate SGA_MAX_SIZE?
Based on my current research, Solaris (and perhaps AIX) may be configured so that when pages are not used, the pre-allocation does not use physical memory.
Therefore under Solaris 10 - and I still need to verify this myself - the real memory requirement will be managed by SGA_TARGET + SGA fixed overhead + SGA manual sub-pools.
This definitely does NOT work under Linux and Windows.
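If I get time on a Solaris 10 box, a rough way to confirm which mode an instance is using (a sketch based on my reading so far, not verified; the ora_dism_<sid> watchdog process reportedly appears only when DISM is in use):
# Sketch (Solaris, unverified): look for the DISM watchdog and the reserved segment
ps -ef | grep -i dism | grep -v grep   # an ora_dism_<SID> process suggests DISM (pageable SGA)
ipcs -m                                # the segment should still show the full SGA_MAX_SIZE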
Placeholder for various sites I found during the investigation:
I think this is critical to the discussion: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b15658/appe_sol.htm
This same discussion elsewhere: http://www.webservertalk.com/archive149-2004-8-333787.html
Current conclusion (not verified, as I do not currently have test systems available):
SGA_MAX_SIZE will be allocated as shared memory on all operating systems. This will apparently always show up in the ipcs size.
However, some operating systems allow the shared memory to be unlocked, or not pinned in real memory, and handled in chunks or pages. In these situations, the unused amount may be either "not physically allocated" or "placed on a swap device" - e.g. DISM in Solaris. In these cases, the unused amount does not use RAM.
So when [(SGA_TARGET + fixed SGA + manual SGA (keep, recycle pool, etc.) + manual minimums for automatic cache) < SGA_MAX_SIZE], it is possible that physical RAM used may be less than SGA_MAX_SIZE, even though SGA_MAX_SIZE is actually allocated to IPC.
Therefore, on DISM environments, SGA_TARGET could theoretically be used to reduce physical RAM requirements.
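To put rough numbers on that, reusing the 900M/600M figures from the Linux test above purely as an illustration:
potential RAM saving ~= SGA_MAX_SIZE - (SGA_TARGET + fixed SGA + manual pools)
                     ~= 900M - (600M + ~2M + 0)
                     ~= roughly 300M that need not occupy physical memory until it is actually used.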
Is this good or bad for performance? ... I am not even going there without a significant test box.