Actually, netstat is showing many (17k+ this time; in a previous run it was even higher) connections like this:
*.*    *.*    0    0    49152    0    IDLE
The main problem is that my app is running out of file descriptors, even though I set the limit explicitly to a much higher value using ulimit in the script that starts my app.
And I see a lot of IDLE connections like the one above. I tried to investigate whether there is a leak in the file descriptors we use (mainly through sockets, and a few through files). I tried various scenarios where my app connects to other server sockets and where other servers connect to my app (it also has a server socket), but I could not reproduce a case where a TCP connection goes into the IDLE state.
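To check whether the process is actually approaching its descriptor limit while it runs, something like the sketch below might help. It assumes a Linux-style /proc filesystem; on Solaris the pfiles(1) command reports the same per-process information, so treat the /proc path as an assumption to adapt.

```python
import os
import resource

def fd_usage(pid="self"):
    """Return (open_fd_count, soft_limit, hard_limit) for a process.

    Reads /proc/<pid>/fd, which is Linux-specific; on Solaris,
    `pfiles <pid>` gives the same information.
    """
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return open_fds, soft, hard

if __name__ == "__main__":
    used, soft, hard = fd_usage()
    print(f"{used} descriptors open, soft limit {soft}, hard limit {hard}")
```

Running this periodically (or on demand against the app's pid) would show whether descriptor usage is climbing toward the ulimit value set in the startup script.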
The TCP-related tuning parameters seem to be at their defaults.
I know that IDLE means the socket is open but not yet bound.
One possibility is that for some reason it takes longer to establish an incoming connection, and hence incoming connections accumulate in this state.
Another possibility is that the bind actually failed.
I am not sure how to debug and find out, mainly:
- The cause of so many IDLE connections
- Why my app is running out of file descriptors even though the limit has been set to the max
Any input on this would really be helpful.
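For the first question, one thing I can do is tally the netstat output by state to see how the 17k entries break down over time. A rough sketch (the parsing assumes Solaris-style output like the line quoted above, with the state name in the last column):

```python
import subprocess
from collections import Counter

def tally_states(netstat_text):
    """Count occurrences of each TCP state in netstat output.

    Assumes the state name is the last whitespace-separated field,
    as in the Solaris output quoted above.
    """
    known = {"IDLE", "BOUND", "LISTEN", "ESTABLISHED", "TIME_WAIT",
             "CLOSE_WAIT", "FIN_WAIT_1", "FIN_WAIT_2", "SYN_SENT",
             "SYN_RCVD", "LAST_ACK", "CLOSING", "CLOSED"}
    states = Counter()
    for line in netstat_text.splitlines():
        fields = line.split()
        if fields and fields[-1] in known:
            states[fields[-1]] += 1
    return states

if __name__ == "__main__":
    out = subprocess.run(["netstat", "-an"], capture_output=True,
                         text=True).stdout
    for state, count in tally_states(out).most_common():
        print(f"{state:12s} {count}")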
Here are some notes on the TCP IDLE state that I collected:
IDLE: opened but not bound. A socket is in the IDLE state after it is created but before the bind call, so it has no ports associated with it.
TCP endpoints are in the IDLE state when they are first created. You would then normally call bind -> listen (for a server) or [bind] -> connect (for a client).
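That lifecycle can be illustrated with a small sketch: a freshly created socket already consumes a file descriptor but has no local address until bind/connect, which is exactly the window in which Solaris netstat would report it as IDLE. (This is a generic sockets illustration, not my app's code; the exact getsockname result on an unbound socket can vary by platform.)

```python
import socket

# A freshly created endpoint: it consumes a file descriptor but has
# no local address yet -- this is the IDLE window.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("fd:", s.fileno())         # a valid descriptor already exists
print("addr:", s.getsockname())  # unbound: typically wildcard addr, port 0

# Normal server sequence: bind -> listen.
s.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
s.listen(5)
print("bound to:", s.getsockname())  # now it has a real local port

s.close()  # sockets created but never bound/closed would pile up as
           # IDLE entries and slowly exhaust the descriptor limit
```

So if some code path in the app creates sockets but then fails (or takes a long time) to bind or connect them, and never closes them, that would explain both the IDLE pile-up and the descriptor exhaustion together.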