becks wrote: I have a COTS product which uses a 32-bit Oracle database. Currently there are 40+ GB of datafiles. While monitoring the database server, I realised the number of processes hit >80, which caused my COTS product's browsing performance to deteriorate. People have recommended increasing the PGA and SGA, and it seems better.
Just wondering, how many datafiles can a 32-bit database handle? Because the max memory that I can assign to a 32-bit server is only 4GB.
There is NO direct relationship between the amount of RAM and the size of the datafiles.
becks wrote: Just wondering, how many datafiles can a 32-bit database handle?
How much disk space is there? A 32-bit database refers to using 32-bit addressing for memory, not for accessing data on disk. File size and disk size limitations are a factor of the file system used, not of how many bits the CPU uses for addressing RAM.
becks wrote: Because the max memory that I can assign to a 32-bit server is only 4GB.
Incorrect. Yes, a 32-bit address limits the addressing to 4GB of memory. However, there are kernel memory management extensions that enable one to address more memory. These existed in some form or another even back in the old MS-DOS days, via drivers such as himem.sys and emm386.exe. Today this is done via Physical Address Extension (PAE) - a feature implemented by 32-bit Intel and AMD processors.
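To put rough numbers on that (an illustrative sketch only, runnable against any Oracle instance, e.g. via SQL*Plus):

```sql
-- Back-of-the-envelope addressing limits, in GiB.
-- A flat 32-bit address reaches 2^32 bytes; PAE widens physical
-- addressing to 36 bits, although each individual process still
-- sees only a 4GB virtual address space.
SELECT POWER(2, 32) / POWER(2, 30) AS flat_32bit_gib,  -- 4
       POWER(2, 36) / POWER(2, 30) AS pae_36bit_gib    -- 64
FROM   dual;
```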
Thanks for the reply.
I understand the datafile size is limited by the disk space; I was wondering whether a 32-bit database can process that amount of datafiles without increasing the RAM. I am asking this because when the database was first set up, we did not experience any performance issue. Now we are experiencing the deterioration, and the situation is better after increasing the SGA and PGA. I believe increasing the SGA and PGA is related to the RAM of the database server, so I was thinking that we would have to keep increasing the SGA and PGA whenever the size of the datafiles increases again. And since 4GB is the limit on a 32-bit server, there would be a MAX memory up to which I can increase the SGA and PGA.
However, Billy mentioned PAE, which I can look into to assign more memory to the Oracle database. Thanks.
becks wrote: I understand the datafile size is limited by the disk space; I was wondering whether a 32-bit database can process that amount of datafiles without increasing the RAM. I am asking this because when the database was first set up, we did not experience any performance issue. Now we are experiencing the deterioration, and the situation is better after increasing the SGA and PGA.
What is the actual root cause?
The advice to increase the SGA means increasing Oracle's shared memory. The SGA comprises multiple buffer areas. None is specific to the number of database files.
In terms of file I/O, the Buffer Cache can be increased in order to have a larger cache for data in memory. But this is not directly related to the number of database files - it relates to what data is read from disk (database files). A small buffer cache can be a problem with a small number of data files - and not a problem with a large number of data files. The issue is the nature of the I/O being done - and not the number of data files.
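If the Buffer Cache is a suspect, Oracle's own advisory can estimate the effect of resizing it before RAM is committed to it. A sketch for a 10g instance (assumes db_cache_advice is enabled and SELECT privilege on the V$ views):

```sql
-- Estimated physical reads at various candidate buffer cache sizes.
SELECT size_for_estimate   AS cache_mb,
       size_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```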
As for PGA - that relates to how the PL/SQL engine is used (PL/SQL code design). Not to the number of data files.
I believe increasing SGA and PGA is related to the RAM of the database server.
Correct.
So I was thinking that we would have to keep increasing the SGA and PGA whenever the size of the datafiles increases again.
As already mentioned - there is no direct relationship between the number of data files and the SGA. PGA relates to the PL/SQL engine (private process memory) - and this too has nothing to do with the number of data files.
So since 4GB is the limit on a 32-bit server, there would be a MAX memory up to which I can increase the SGA and PGA.
Perhaps you should tell the forum what the actual performance problem is - instead of discussing a solution (increasing the SGA/PGA) that has no direct bearing on the number of data files used by the database.
okie. The root cause is as in my first post: the number of processes running is >80 on my database server. And I have had suggestions to increase the SGA. My initial thinking was that the high number of processes is due to the huge datafile size.
I will find out more on the performance of the database again.
Please tell us the patch level of your database - later patches may favour more PGA usage. Tell us which exact OS, too. Why are you running 32-bit? How much RAM do you have?
There are advisors which can be very helpful for figuring out if you are using the right amount of SGA and PGA.
You should also google how to run statspack. If you are licensed for the performance packs, the performance screen in dbconsole can be useful.
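Those advisors can also be queried directly. A sketch using the standard 10g advisory views (figures are estimates, and v$sga_target_advice is only populated when sga_target is set):

```sql
-- Estimated DB time at different total SGA sizes.
SELECT sga_size, sga_size_factor, estd_db_time
FROM   v$sga_target_advice
ORDER  BY sga_size;

-- Estimated PGA cache hit percentage at different PGA targets.
SELECT pga_target_for_estimate / 1024 / 1024 AS pga_target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```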
The concepts manual is very useful, please review it.
Thanks for all the suggestions. I will be looking into all of them and hopefully come up with some conclusions.
I am using Windows Server 2003 Standard Edition. Why 32-bit? That is the question I have been asking, but this server has been around for some years, so I do not know the exact reason. I will propose changing it to a 64-bit server, but I believe some migration needs to be done. Any suggestions for that, e.g. using the export/import utility?
The patch version is 10.2.0.0.0. The RAM is currently 4GB, but I have additional RAM available to assign.
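On the 32-bit to 64-bit move: one route on 10.2 is Data Pump export/import (expdp/impdp), which is a documented path for word-size changes. A hedged sketch only - the directory name and path below are made-up placeholders, and a real migration plan needs testing first:

```sql
-- On both source and target: a directory object for the dump files
-- ("dp_dir" and the path are placeholders, not real names).
CREATE DIRECTORY dp_dir AS 'C:\oracle\dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
```

With the directory in place, a full export on the old server would look like `expdp system FULL=y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=exp.log`, with the matching `impdp` run on the new 64-bit server. Test the timing against your maintenance window before committing to this route.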
becks wrote: okie. The root cause is as in my first post: the number of processes running is >80 on my database server.
What processes? Job processes? Dedicated server processes? Parallel processing processes? Advanced Queuing processes? Have you identified these processes?
And I have had suggestions to increase the SGA.
There's only one reason that springs to mind immediately for an increased number of user sessions impacting the SGA and requiring a larger SGA: Shared Servers. The UGA (user session memory on the server) resides in the SGA for shared servers. For dedicated servers, it resides in the private process memory (aka PGA) of the actual server process, and not in shared memory.
My initial thinking was that the high number of processes is due to the huge datafile size.
Not really... The number of I/O processes (assuming DB writer and slave processes) depends on the amount of I/O that needs to be done, not on the number of data files.
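Identifying what those >80 processes actually are is straightforward from the dictionary views. A sketch (standard 10g views, SELECT privilege assumed):

```sql
-- Sessions broken down by type (USER vs BACKGROUND) and status.
SELECT type, status, COUNT(*) AS session_count
FROM   v$session
GROUP  BY type, status
ORDER  BY session_count DESC;

-- Which programs own the most server processes.
SELECT program, COUNT(*) AS process_count
FROM   v$process
GROUP  BY program
ORDER  BY process_count DESC;
```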
becks wrote: I am using Windows Server 2003 Standard Edition.
Old o/s. On Windows, Oracle uses the standard Win32 threading model. So all Oracle database instance "processes" (which would be physical process images on a Linux/Unix kernel) will be threads in the oracle.exe process image.
The default stack size for a Win32 thread is 1MB (as per Thread Stack Size (Windows)). I've heard mention that the oracle.exe process sets the stack size to 2MB.
Either way - each thread means a memory footprint. 80 user sessions (dedicated server) means 160MB of memory is needed (assuming 2MB stack per thread). That is 4% of available RAM.
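The arithmetic above as a quick check (the 2MB stack is the hearsay figure mentioned, not a verified constant):

```sql
-- 80 dedicated-server threads at an assumed 2MB stack each,
-- against 4GB (4096MB) of RAM.
SELECT 80 * 2                             AS stack_total_mb,  -- 160
       ROUND(100 * (80 * 2) / (4 * 1024)) AS pct_of_ram       -- ~4
FROM   dual;
```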
So 32bit memory architecture does not scale well in this case.
An alternative is to service those 80 user sessions using (for example) 10 shared server processes. These 10 threads reduce the memory footprint. However, they can only execute 10 concurrent client requests at a time. Fine for 80 clients issuing small (OLTP type) requests - problematic for 80 clients issuing complex (OLAP type) requests.
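Moving to shared servers is an instance configuration change. A minimal 10g sketch (the values mirror the 10-server example above and are not a recommendation):

```sql
-- Enable a pool of shared server processes and a TCP dispatcher.
ALTER SYSTEM SET shared_servers = 10;
ALTER SYSTEM SET max_shared_servers = 20;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
```

A client can still force a dedicated server for itself with SERVER=DEDICATED in its connect descriptor, which is useful for batch or OLAP-style sessions.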
Why 32-bit? That is the question I have been asking, but this server has been around for some years, so I do not know the exact reason.
You should also be asking: why Windows? And why not Linux?
Personal experience - Linux is not only a lot cheaper in terms of TCO, it is also a very robust database server o/s for Oracle.