Fixing the "Too many open files" Error in Linux
Abstract
On Linux servers under heavy load, the "Too many open files" error occurs frequently. It means that a process cannot open new files (file descriptors) because it has already opened too many. On Linux, a maximum open-file limit is set by default for each process and user, and the default values are modest.
The number of file descriptors a user or process may hold concurrently is therefore constrained, and the "Too many open files" error appears as soon as the limit is reached and the user or process tries to open another file descriptor.
Increasing the maximum number of file descriptors a user or process can open is the fix for this problem.
Note − Linux commands are case-sensitive.
File Descriptor
An unsigned integer used by a process to identify an open file is called a file descriptor.
The OPEN_MAX constant, defined in the limits.h header, sets an upper bound on the number of file descriptors a process may use, and the ulimit -n setting controls it at run time. The open, pipe, creat, and fcntl subroutines all produce file descriptors. File descriptors are usually specific to each process, but they can be shared by child processes created with the fork subroutine, or copied by the fcntl, dup, and dup2 subroutines.
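The shell itself can allocate and release descriptors with the exec builtin, which makes the lifecycle easy to see. The snippet below is a minimal sketch; the file path /tmp/fd-demo.txt is just an illustrative name:

```shell
# Open descriptor 3 for writing, use it, then close it.
exec 3> /tmp/fd-demo.txt   # descriptor 3 now refers to the file
echo "hello" >&3           # write through descriptor 3
exec 3>&-                  # close descriptor 3
cat /tmp/fd-demo.txt       # prints: hello
```

Every such open descriptor counts toward the process's limit until it is closed.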
File descriptors are indices into the file descriptor table that the kernel keeps for each process in its u (user) block area. The most typical ways for a process to acquire file descriptors are the open and creat operations and inheritance from a parent process. When a fork operation takes place, the descriptor table is duplicated for the child process, giving it equal access to the files the parent process uses.
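On Linux, the per-process descriptor table can be inspected directly through the /proc filesystem. The sketch below lists the descriptors of the current shell; entries 0, 1, and 2 are standard input, output, and error:

```shell
# $$ expands to the shell's PID; each entry in /proc/$$/fd is a
# symlink from a descriptor number to the file it refers to.
ls -l /proc/$$/fd
```

The same path works for any PID you own, which is handy when hunting for the process that is leaking descriptors.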
File Descriptor Limits
There are two restrictions on how many file descriptors a process can have open at once. The soft limit can be adjusted by any unprivileged user but can never exceed the hard limit. An unprivileged user can lower the hard limit but cannot raise it again, whereas a privileged user such as root can raise or lower it as required.
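This asymmetry is easy to demonstrate in a throwaway shell. The sketch below assumes typical defaults (soft limit 1024 or higher):

```shell
# Lower the soft limit for the current shell only; any user may do
# this as long as the value stays at or below the hard limit.
ulimit -Sn 512
ulimit -Sn   # now reports 512

# An unprivileged user may also lower the hard limit, but cannot
# raise it again afterwards; only root can increase a hard limit.
ulimit -Hn 2048
```

Because the change applies only to the current shell and its children, experimenting this way is safe.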
Example
Run the command indicated below to verify the current session's descriptor files limit.
$ ulimit -n
Output
1024
The current soft limit is 1024, as seen above.
Example
Now let's run the following command to check the maximum number of processes available to the current user −
$ ulimit -u
Output
31211
Example
Use the command shown below to find out how many files are currently open across the system −
$ lsof | wc -l
Output
363869
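The figure reported by lsof includes duplicate entries for threads and child processes, so it overstates the true count. For the kernel's own accounting of allocated file handles, /proc can be read directly (a sketch):

```shell
# /proc/sys/fs/file-nr reports three numbers: allocated file
# handles, allocated-but-unused handles, and the system-wide maximum.
cat /proc/sys/fs/file-nr
```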
Example
To check the soft limit and the hard limit for the current session, respectively, we use the ulimit command with the -Sn and -Hn flags −
$ ulimit -Sn
Output
1024
$ ulimit -Hn
Output
4096
Increasing File Descriptor Limits
Let's try using the ulimit -n command to set the limit for the current session −
Example
$ ulimit -n 4096
$ ulimit -n
Output
4096
Adding a few changes to the /etc/security/limits.conf file and logging back in will allow us to modify the soft and hard limits for all processes globally −
* hard nofile 500000
* soft nofile 500000
root hard nofile 500000
root soft nofile 500000
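Note that the nofile values configured above are still bounded by the kernel-wide ceiling on open file handles, which can be inspected through /proc (a sketch; whether it needs raising depends on the workload):

```shell
# Read the kernel-wide maximum on open file handles; per-user
# nofile limits cannot usefully exceed this value.
cat /proc/sys/fs/file-max
```

Root can raise this value at run time with sysctl -w fs.file-max=<value> if the default proves too low.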
Conclusion
In this tutorial, we looked at several ways to fix the "Too many open files" error in Linux. Any Linux user can resolve this problem quickly, and the procedures outlined above work across the major Linux distributions. File descriptors are a fundamental element of every Unix-like operating system.
I hope you find these command examples useful and that they help you explore Linux further.