Define high? You should use connection pooling of some kind to prevent excessive logging in and out from the application. There is an overhead associated with a new database connection so you should try to avoid making too many if the connections are going to be used frequently. This is what connection pooling is for.
So to answer your question, yes, there will be a problem if you make too many - poorer performance than could be expected if you pooled connections.
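To make the pooling idea concrete, here is a minimal sketch (in Python, purely illustrative; `make_connection` is a stand-in for a real database connect call, not any actual driver API). The point is that the expensive connect happens only once per pooled connection, however many requests reuse it:

```python
# Minimal illustration of connection pooling: connections are created once
# up front and reused, rather than opened and closed per request.
import queue

class SimplePool:
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # pay the connection cost up front

    def acquire(self):
        return self._pool.get()        # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)           # hand the connection back for reuse

opened = 0
def make_connection():
    # Stand-in for a real (expensive) database connect call.
    global opened
    opened += 1
    return object()

pool = SimplePool(make_connection, size=5)
for _ in range(100):                   # 100 "requests"...
    conn = pool.acquire()
    pool.release(conn)
print(opened)                          # -> 5: only 5 connections ever opened
```

Without the pool, those 100 requests would have meant 100 logins; with it, the cost is bounded by the pool size.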
It depends on many things: are you using dedicated server processes, MTS, connection pools, SCAN...?
Can you give us more information, e.g. the number of connections per second?
How do you know that? Prove it. Did you check V$SESSION?
A quick look in the listener log file will show you if you have a large number of connections being made to the DB.
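As a rough way to put numbers on that, you can count the connection entries per minute. A sketch (the log path and exact line layout vary by installation; this assumes the common format where each inbound connection writes a line containing `* establish *` prefixed by a `DD-MON-YYYY HH24:MI:SS` timestamp):

```python
# Rough count of connection attempts per minute from listener log lines.
# Assumes the common listener.log format: each inbound connection produces
# a line containing "* establish *", led by a DD-MON-YYYY HH24:MI:SS stamp.
from collections import Counter

def count_connects(lines):
    per_minute = Counter()
    for line in lines:
        if "* establish *" in line:
            stamp = line[:17]  # "DD-MON-YYYY HH24:MI" - bucket by minute
            per_minute[stamp] += 1
    return per_minute

# Illustrative sample lines, not real log output:
sample = [
    "03-JAN-2024 10:15:01 * (CONNECT_DATA=...) * establish * ORCL * 0",
    "03-JAN-2024 10:15:02 * (CONNECT_DATA=...) * establish * ORCL * 0",
    "03-JAN-2024 10:16:30 * service_update * ORCL * 0",
]
print(count_connects(sample))  # -> Counter({'03-JAN-2024 10:15': 2})
```

In practice you would feed it the real file, e.g. `count_connects(open("/u01/app/oracle/.../listener.log"))` (path is an example only).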
Thanks for the reply.
The application generates 2,500 new sessions in a short burst every 15 minutes. Is that serious?
What are you using to connect the application to the database?
It's certainly not something I'd recommend, 'serious' or not...
Please check the following on Unix:
ps -ef | grep -i "local=no" | wc -l
Check the count and kill old sessions if they are not needed.
My Blog: http://shishirtekade.blogspot.com
There are OS-level limits on how many processes you can allocate, and each new dedicated server process also allocates its own PGA, so memory is consumed as well.
You could consider the following points:
1.) Use connection pooling in the application to avoid excessive new sessions.
2.) By default, Oracle manages each connection as a dedicated server process. Could you please check whether your application supports the shared server architecture? Please refer to the architecture documentation for details.
3.) Make sure the application uses bind variables.
Please monitor the application using Oracle Enterprise Manager.
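On point 3, a short illustration of what bind variables buy you, using the Python DB-API. sqlite3 is used here only as a stand-in so the example is self-contained; with an Oracle driver the placeholder syntax differs (e.g. `:1` or `:name` rather than `?`), but the principle is the same: the SQL text stays constant, so the database can reuse the parsed statement instead of hard-parsing a new one for every literal value.

```python
# Bind variables via the Python DB-API (sqlite3 as a self-contained stand-in;
# Oracle drivers use :name / :1 placeholders, but the idea is identical).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
conn.execute("INSERT INTO emp VALUES (7369, 'SMITH')")

# Bad: literal concatenation produces a distinct SQL text per value
# (a fresh hard parse each time, and a SQL injection risk):
#   conn.execute("SELECT ename FROM emp WHERE empno = " + str(7369))

# Good: one shared SQL text, value supplied as a bind:
cur = conn.execute("SELECT ename FROM emp WHERE empno = ?", (7369,))
print(cur.fetchone()[0])  # -> SMITH
```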
Launching 2500 sessions in 15 minutes is not too bad, less than 3 per second. Oracle will have no problem with that. But it will be putting some strain on the operating system if it is having to launch a dedicated server process for each session. I don't think you mentioned what OS you are on, but Unix is pretty bad at launching processes quickly.
Overall, I wouldn't have thought that this level of activity is likely to be critical, but I would look at using connection pooling of some kind or shared server, just to take the stress off the OS.
hitgon wrote: Er.... have you thought this through? And mentioned it to your end users? Just because some guy is not running a SQL right now doesn't mean that he didn't run one in the last second, or won't want to run one in the next second.
You can use the script below to generate the commands to kill sessions that are not active:
-- Script to generate KILL commands for inactive sessions of a specified schema
SET PAGESIZE 1000
SET HEAD OFF
SELECT 'ALTER SYSTEM KILL SESSION '||chr(39)||s.sid||chr(44)||s.serial#||chr(39)||';'
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  s.type     != 'BACKGROUND'
AND    s.status   != 'ACTIVE'
AND    s.username  = 'SCOTT';
This is just to help them handle the situation in case there are lots of sessions that have already been created but are sitting in an inactive state, because we are at the monitoring stage, troubleshooting the application's behaviour on the Oracle side.
We run Oracle on a Windows platform and we've seen up to a maximum of 50 connections a second. That seems to be the limit; at that rate, things are not good.
As JohnWatson pointed out, the DB and OS can probably handle it without a problem, but that doesn't mean it should be left alone. You should try to minimise overhead from all processes before it becomes critical. Just because it's not a problem now doesn't mean it won't be in the future. Using pooling makes the application more scalable; without it, an increase in connections being opened and closed will cause performance issues at some point. Better to sort it out now rather than later.
Why are the app servers making 2,500 connections in 15 minutes? Do the connections then disconnect, and does it happen all over again?