Stacey,
Take a look at:
sysctl -a | grep fs.file-max
This setting is the system-wide maximum number of open files, including sockets.
Also use 'ulimit -n' to see the limit for the current process.
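To see both limits side by side, here is a quick check (standard Linux interfaces; the values will differ per machine):

```shell
# System-wide ceiling on open files (all processes combined)
cat /proc/sys/fs/file-max

# How many are in use right now: "allocated  unused  max"
cat /proc/sys/fs/file-nr

# Per-process limits for the current shell and its children
ulimit -Sn   # soft limit (what the process actually hits)
ulimit -Hn   # hard limit (ceiling the soft limit can be raised to)
```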
Edit file-max in /etc/sysctl.conf and run sysctl -p (requires root),
or raise it for the current session with: ulimit -n <new setting>.
To make this permanent, change the profile for the oracle user as described in the installation documentation.
You might need to change /etc/security/limits.conf as well if you want to go beyond the settings there.
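Putting the steps above together, the permanent settings live in two files. The numbers below are only example values (not from this thread), so tune them for your workload:

```shell
# /etc/sysctl.conf -- raise the system-wide ceiling (example value)
#   fs.file-max = 6815744
# then reload as root:
#   sysctl -p

# /etc/security/limits.conf -- per-user nofile limits (example values)
#   oracle  soft  nofile  1024
#   oracle  hard  nofile  65536
# The soft limit applies at login; the user can raise it with
# 'ulimit -n', but only up to the hard limit.
```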
Stacey,
Each directory, file, or socket a process uses has a handle to it called a file descriptor. When the process reaches its limit, you get a "too many open files" exception.
As Bernhard said, you could increase the descriptor limit using ulimit.
Sometimes increasing the descriptors may not really help if the limit is already reasonable, like 1024 or so, and your process is not really that I/O intensive.
I once worked for a great boss who told me that when Unix was fairly new, the limit was something like 8 or 16 file descriptors per process.
Today you may not be able to run your process with just 8 or 16, but usually people run into this issue because of program errors.
Sometimes increasing the limit will mask the program error,
but it may not always work. I would suggest you take a look at all the open files when the process throws this exception.
You can use the lsof command for this. If you don't have lsof, you can look at the /proc/<PID>/fd directory.
If you find that a particular file or socket is listed many times, check whether you really need it; if not, try to close those descriptors.
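For example, to inspect a process's descriptors by hand (assumes a Linux /proc; $$ is the current shell's PID here, so substitute the PID of the misbehaving process):

```shell
# List this shell's open descriptors straight from /proc
ls -l /proc/$$/fd

# Count them -- compare against the limit from 'ulimit -n'
ls /proc/$$/fd | wc -l

# If lsof is installed, it also shows what each descriptor points at
command -v lsof >/dev/null && lsof -p $$

# Spot a leak: count repeated names; one file or socket listed hundreds
# of times usually means something is opened in a loop and never closed
ls -l /proc/$$/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head
```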
Thanks Bernhard and Maverick for the wonderful explanation.
I will surely try the things and let you know.