Linux per-process resource limits - a deep Red Hat Mystery
Per-process resource limits in Linux are constraints that prevent individual processes from consuming excessive system resources like CPU time, memory, and file descriptors. These limits ensure system stability by preventing resource starvation and maintaining fair resource allocation among competing processes.
Red Hat Enterprise Linux provides multiple mechanisms for implementing these limits, including traditional ulimit commands and the more advanced Control Groups (cgroups) framework. Understanding both approaches is essential for effective system administration.
What are Per-Process Resource Limits?
Per-process resource limits are system-enforced boundaries that restrict how much of a particular resource a process can consume. When a process attempts to exceed its limit, the kernel either blocks the request or terminates the process, depending on the resource type.
For example, if a process tries to allocate more memory than its limit allows, the allocation fails (malloc returns NULL with errno set to ENOMEM) and the process must handle the error. This prevents runaway processes from exhausting memory and destabilizing the entire system.
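A quick way to watch a limit being enforced, without risking system memory, is the open-files limit: a subshell lowers its own ceiling, and the next open past it fails immediately. (Illustrative sketch; the `/tmp/demo` path and the limit of 8 are arbitrary choices, not from the article.)

```shell
# Lower the subshell's file-descriptor limit to 8, then try to open
# fd 9 -- the kernel rejects it because fds 8 and above exceed the limit.
bash -c 'ulimit -n 8; exec 9>/tmp/demo' \
  && echo "open succeeded" \
  || echo "open rejected: limit enforced"
```

Only the subshell is affected; the parent shell keeps its original limits.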
Types of Resource Limits
| Resource | ulimit Flag | Description | Behavior When Exceeded |
|---|---|---|---|
| CPU time | -t | Maximum CPU seconds per process | Process receives SIGXCPU, then is killed |
| Virtual memory | -v | Maximum virtual memory size | Memory allocation fails |
| Open files | -n | Maximum file descriptors | File open operations fail |
| Core dump size | -c | Maximum core file size | Core dump truncated or disabled |
| Process count | -u | Maximum processes per user | Process creation (fork) fails |
Setting Limits with ulimit
The ulimit command provides a simple way to view and modify resource limits for the current shell session and its child processes.
View all current limits
```shell
ulimit -a
```
Set specific limits (examples)

```shell
# Set max open files to 2048
ulimit -n 2048

# Set max virtual memory to 1GB (in KB)
ulimit -v 1048576

# Set max CPU time to 300 seconds
ulimit -t 300
```
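Each resource actually has two limits: a soft limit the process may adjust, and a hard limit acting as a ceiling that an unprivileged user can lower but never raise. A minimal sketch of the distinction (the value 256 is arbitrary):

```shell
# -S addresses the soft limit, -H the hard limit.
echo "hard ceiling: $(ulimit -H -n)"
bash -c '
  ulimit -S -n 256          # lower the soft limit in this subshell only
  echo "soft limit is now: $(ulimit -S -n)"
'
ulimit -S -n                 # the parent shell is unchanged
```

Changes made with ulimit die with the shell; persistent limits belong in /etc/security/limits.conf, covered below.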
Red Hat Enterprise Linux: Control Groups (cgroups)
Red Hat Enterprise Linux uses cgroups for advanced resource management. Cgroups provide hierarchical organization of processes with precise resource control and monitoring capabilities. The commands below use the cgroups v1 tooling from the libcgroup package, as shipped in RHEL 6 and 7; newer releases move toward cgroups v2 and systemd-managed slices.
Setting Up cgroups
Step 1: Install required packages
```shell
sudo yum install libcgroup libcgroup-tools
```
Step 2: Create a new cgroup
```shell
# Create cgroup for memory and CPU control
sudo cgcreate -g memory,cpu:/myapp
```
Step 3: Set resource limits
```shell
# Limit memory to 512MB
sudo cgset -r memory.limit_in_bytes=536870912 myapp

# Limit CPU to 50% of one core
sudo cgset -r cpu.cfs_quota_us=50000 myapp
sudo cgset -r cpu.cfs_period_us=100000 myapp
```
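The two CPU values work as a ratio: cfs_quota_us / cfs_period_us is the fraction of CPU time the group may use per period, so 50000 / 100000 = 0.5, i.e. half of one core. A quick sanity check of that arithmetic (the 1.5-core target is an illustrative assumption, not from the article):

```shell
# Allowed CPU share = quota / period. To grant N cores, set
# quota = N * period. Shell arithmetic is integer-only, so scale by 100.
period=100000        # 100 ms, the default CFS period
cores_x100=150       # target: 1.5 cores, scaled by 100
quota=$(( period * cores_x100 / 100 ))
echo "$quota"        # quota granting 1.5 cores with a 100 ms period
```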
Step 4: Assign processes to the cgroup
```shell
# Add existing process by PID
sudo cgclassify -g memory,cpu:myapp 1234

# Run new process in cgroup
sudo cgexec -g memory,cpu:myapp /path/to/application
```
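Membership can be confirmed through /proc, where every process lists the cgroups it belongs to. Checking the current shell needs no root; the `/myapp` lines in the comment are an illustrative cgroups v1 example, not real output from any particular system:

```shell
# Each line is hierarchy-ID:controller:path (cgroups v1 format).
cat /proc/self/cgroup
# After the cgclassify above, PID 1234 would show lines such as:
#   4:memory:/myapp
#   3:cpu,cpuacct:/myapp
```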
Persistent Configuration
For permanent limits, configure them in /etc/security/limits.conf:

```shell
# Format: <domain> <type> <item> <value>
username     soft    nofile    2048
username     hard    nofile    4096
@developers  soft    nproc     100
*            hard    cpu       300
```
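These entries are applied by the pam_limits module when a session opens, so they take effect only in sessions started after the edit. A fresh login can verify them:

```shell
# Run from a fresh login session -- limits.conf is not re-read by
# shells that were already running.
ulimit -Sn                               # soft open-files limit
ulimit -Hn                               # hard open-files limit
grep "Max open files" /proc/self/limits  # both values, kernel's view
```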
Monitoring Resource Usage
Check current resource consumption

```shell
# View cgroup statistics (cgroups v1 paths)
cat /sys/fs/cgroup/memory/myapp/memory.usage_in_bytes
cat /sys/fs/cgroup/cpu/myapp/cpuacct.usage

# Monitor a process's limits
cat /proc/<PID>/limits
```
Conclusion
Per-process resource limits are crucial for maintaining Linux system stability and performance. Red Hat Enterprise Linux offers both traditional ulimit controls and advanced cgroups management, providing administrators with flexible tools to prevent resource abuse and ensure fair system resource allocation.
