This article, I think, addresses your issue.
Basically, you should use the ulimit command to raise the available resource limits.
For example:
Use the following command to display the system-wide maximum number of open file descriptors:
cat /proc/sys/fs/file-max
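To see how many file handles are currently in use against that maximum, you can also read /proc/sys/fs/file-nr, which reports three fields: allocated handles, free handles, and the system maximum. A quick comparison:

```shell
# file-max holds the system-wide ceiling; file-nr's third
# field mirrors it, and the first field is handles in use.
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
```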
To see the hard and soft limits, issue these commands:
# ulimit -Hn
# ulimit -Sn
To see the hard and soft limits for the httpd or oracle user, switch to that user first:
# su - username
To increase the maximum number of open files, set a new value for the kernel parameter fs.file-max as follows (log in as root):
# sysctl -w fs.file-max=100000
The above command sets the limit to 100000 files. To make the setting persist across reboots, edit /etc/sysctl.conf and append the following directive:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in for the change to take effect, or just run:
# sysctl -p
Verify your settings with:
# cat /proc/sys/fs/file-max
or:
# sysctl fs.file-max
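Both commands read the same kernel parameter, so they should agree. A quick sanity check (no specific value assumed):

```shell
# fs.file-max via sysctl and /proc/sys/fs/file-max are two
# views of the same kernel parameter; both report one value.
sysctl -n fs.file-max
cat /proc/sys/fs/file-max
```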
The above procedure sets the system-wide file descriptor (FD) limit. You can also restrict the httpd user (or any other user) to specific limits by editing /etc/security/limits.conf
and adding lines such as:
httpd soft nofile 4096
httpd hard nofile 10240
Then check them with:
# su - httpd
$ ulimit -Hn
$ ulimit -Sn
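You can also inspect the effective limits of any running process via /proc/<pid>/limits, which is handy for a daemon like httpd that was started by init rather than from your shell. For example, for the current shell:

```shell
# The "Max open files" line shows the soft and hard limits
# that actually apply to this process.
grep "open files" /proc/self/limits
```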
If you hit this problem on other Linux distributions, check /etc/pam.d/login
and make sure pam_limits.so
is enabled, e.g.:
session required pam_limits.so
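A quick way to confirm this is in place (the path /etc/pam.d/login is the usual location, but some distributions reference pam_limits from /etc/pam.d/common-session instead):

```shell
# Prints the matching session line if pam_limits is enabled;
# otherwise falls back to a message rather than failing.
grep pam_limits /etc/pam.d/login 2>/dev/null \
    || echo "pam_limits.so not found in /etc/pam.d/login"
```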