Fixing the "Too many open files" Error in Linux

On Linux servers under heavy load, the "too many open files" error occurs frequently. This error indicates that a process cannot open new files (file descriptors) because it has reached the system-imposed limit. Linux sets a default maximum open file limit for each process and user, and these default settings are often modest for production workloads.

The number of concurrent file descriptors that users and processes can open is constrained by system limits. When a user or process attempts to open more file descriptors than allowed, the "Too many open files" error appears. The solution involves increasing the maximum number of file descriptors a user or process can open.
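The error is easy to reproduce safely in a throwaway subshell by lowering the soft limit first. This is a sketch; the exact message comes from the kernel's EMFILE error, which strerror reports as "Too many open files":

```shell
# Lower the soft fd limit to 3 in a subshell: descriptors 0-2
# (stdin/stdout/stderr) already use the whole allowance, so cat's
# open() of /dev/null fails with EMFILE.
( ulimit -n 3; cat /dev/null ) || true   # cat: /dev/null: Too many open files
```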

File Descriptors

A file descriptor is a non-negative integer that a process uses to identify an open file. It serves as an index into the file descriptor table the kernel maintains for each process. File descriptors are created by system calls such as open, pipe, creat, and fcntl.

The OPEN_MAX constant in sys/limits.h sets an upper bound on file descriptors per process. Most file descriptors are process-specific, but they can be shared: child processes created via fork inherit copies of their parent's descriptors, and a process can duplicate its own with the dup, dup2, and fcntl system calls.
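As a quick illustration, a shell's own descriptor table can be inspected under /proc (the descriptor number 3 here is arbitrary):

```shell
# Open /dev/null on descriptor 3 in the current shell, inspect the
# per-process descriptor table, then close the descriptor again.
exec 3</dev/null
ls -l /proc/$$/fd    # includes 0, 1, 2 and the new fd 3
exec 3<&-
```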

File Descriptor Limits

Linux implements two types of limits for file descriptors:

  • Soft limit: can be adjusted by any user, but never above the hard limit

  • Hard limit: can be lowered by unprivileged users but raised only by a privileged user (root)
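The asymmetry can be seen directly with ulimit. This sketch assumes the hard limit is a finite number (not "unlimited"):

```shell
# Raising the soft limit up to the hard ceiling succeeds for any user;
# going even one descriptor past it is rejected by the kernel.
hard=$(ulimit -Hn)
ulimit -Sn "$hard"                                          # allowed: soft == hard
ulimit -Sn $((hard + 1)) 2>/dev/null || echo "rejected"     # soft may never exceed hard
```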

Checking Current Limits

To check the current file descriptor limit for the session:

$ ulimit -n
1024

To check the soft and hard limits separately:

$ ulimit -Sn    # Soft limit
1024
$ ulimit -Hn    # Hard limit
4096

Checking Current Usage

To estimate how many files are open system-wide (note that lsof lists duplicate entries for threads and memory-mapped files, so this overcounts actual descriptors):

$ lsof | wc -l
363869

To check open files for a specific process:

$ lsof -p <process_id> | wc -l
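To find which processes hold the most descriptors, it is often quicker (and less noisy) to count entries under /proc/<pid>/fd directly. This is a sketch; reading other users' fd directories typically requires root:

```shell
# Count open descriptors per process and print the top five consumers.
for pid in /proc/[0-9]*; do
  printf '%s %s\n' "$(ls "$pid/fd" 2>/dev/null | wc -l)" "${pid##*/}"
done | sort -rn | head -5
```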

Fixing the "Too Many Open Files" Error

Temporary Fix (Current Session)

Increase the limit for the current session using ulimit:

$ ulimit -n 4096
$ ulimit -n
4096

Note: This change only affects the current session and will reset after logout.

Permanent Fix (System-wide)

For a permanent solution, edit the /etc/security/limits.conf file:

$ sudo nano /etc/security/limits.conf

Add the following lines at the end:

*         hard    nofile       500000
*         soft    nofile       500000
root      hard    nofile       500000
root      soft    nofile       500000
Field       Description
*           Applies to all users
hard/soft   Type of limit
nofile      Limit on the number of open files
500000      New limit value

Additional System Configuration

For very high limits, you may also need to modify system-wide settings:

$ echo 'fs.file-max = 2097152' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
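Kernel-wide file handle usage can be checked against that ceiling at any time via /proc:

```shell
# /proc/sys/fs/file-nr reports three numbers: allocated file handles,
# allocated-but-unused handles, and the system-wide maximum (file-max).
cat /proc/sys/fs/file-nr
```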

After making these changes, users must log out and log back in for the new limits to take effect, since limits.conf is applied by PAM at login time.

Verification

After applying the changes, verify the new limits:

$ ulimit -n
$ cat /proc/sys/fs/file-max
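The effective limits of an already-running process can also be read from /proc without restarting it, which is useful for confirming that a long-lived service actually picked up the new values:

```shell
# Show the soft and hard "open files" limits of the current shell;
# replace "self" with a PID to inspect another process.
grep 'Max open files' /proc/self/limits
```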

Conclusion

The "Too many open files" error is a common Linux issue that can be resolved by increasing file descriptor limits. Use ulimit for temporary fixes and modify /etc/security/limits.conf for permanent system-wide changes. Proper monitoring of file descriptor usage helps prevent this error in production environments.

Updated on: 2026-03-17T09:01:38+05:30
