Operating System - Semaphores
In operating systems, semaphores are used to ensure proper process synchronization and to avoid race conditions when multiple processes access shared resources concurrently. Read this chapter to understand the concept of semaphores, their types, operations, and how they are implemented in operating systems.
- Semaphores for Process Synchronization
- Types of Semaphores
- Working of Semaphores
- Implementation of Semaphore
- Advantages of Semaphores
- Disadvantages of Semaphores
Semaphores For Process Synchronization
Semaphore is a variable (commonly an integer type) that is used to control access to a common resource by multiple processes in a concurrent system. By controlling access to shared resources, semaphores prevent critical section issues and ensure process synchronization in multiprocessing systems.
There are two atomic operations defined on semaphores: wait(S), which decrements S when S is positive and otherwise makes the caller wait, and signal(S), which increments S, potentially allowing a waiting process to proceed. The sections below explain these operations in detail.
Wait Operation
The wait operation checks the semaphore's value. If the value is greater than 0, the process continues and S is decremented by 1. If the value is 0, the process is blocked (waits) until S becomes positive. A simple busy-waiting version can be written as −
wait(S) {
    while (S <= 0)
        ;   // busy wait until S becomes positive
    S--;
}
Signal Operation
The signal operation increments its argument, S. After a process finishes using the shared resource, it performs the signal operation, which increases the semaphore's value by 1, potentially unblocking other waiting processes and allowing them to access the resource.
signal(S) {
    S++;
}
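The pseudocode above busy-waits. Real operating systems usually block the waiting process instead. As an illustrative sketch (not how any particular kernel implements it), the same wait/signal semantics can be built in Python on top of a condition variable, so a waiting thread sleeps rather than spins:

```python
import threading

class SimpleSemaphore:
    """A minimal counting semaphore built on a condition variable (sketch)."""

    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        # Block (sleep, not busy-wait) until the count is positive, then decrement.
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):
        # Increment the count and wake one waiting thread, if any.
        with self._cond:
            self._value += 1
            self._cond.notify()

sem = SimpleSemaphore(1)
sem.wait()      # count goes 1 -> 0; a second wait() here would block
sem.signal()    # count goes 0 -> 1
```

The `while` loop (rather than an `if`) re-checks the count after waking, since another thread may have taken the resource in between.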
Types of Semaphores
There are two main types of semaphores: binary semaphores and counting semaphores. Details about these are as follows −
- Binary Semaphores
- Counting Semaphores
1. Binary Semaphores
Binary semaphores are semaphores whose value is restricted to 0 and 1. A process can complete the wait operation only when the semaphore's value is 1; if the value is 0, the process blocks until another process performs a signal. Binary semaphores are often simpler to implement than counting semaphores.
Binary semaphores are used where only a single instance of a resource is available. For example, if there is only one printer in the system, a binary semaphore can be used to control access to the printer.
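The printer scenario can be sketched with Python's standard-library `threading.Semaphore` initialized to 1 (the printer itself is simulated; the job names are illustrative):

```python
import threading

printer = threading.Semaphore(1)   # one printer, so the count starts at 1
completed = []

def print_job(doc):
    printer.acquire()              # wait: block if another job holds the printer
    try:
        # Critical section: only one job "prints" at a time.
        completed.append(doc)
    finally:
        printer.release()          # signal: hand the printer to the next job

jobs = [threading.Thread(target=print_job, args=(f"doc{i}",)) for i in range(3)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(sorted(completed))           # all three jobs eventually print
```

Because the count never exceeds 1, at most one job is in the critical section at any moment; the others block in `acquire()`.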
2. Counting Semaphores
Counting semaphores are integer-valued semaphores with an unrestricted value domain. They coordinate access to a pool of resources, with the semaphore's count representing the number of available instances: a wait operation decrements the count when a resource is acquired, and a signal operation increments it when a resource is returned.
Counting semaphores are used when multiple instances of a resource are available. For example, if there are 5 identical resources, the semaphore S is initialized to 5. Each time a process acquires a resource, S is decremented by 1, and when the resource is released, S is incremented by 1. When S = 0, no resources are available, and processes requesting one are blocked until a resource is released.
Working of Semaphores
To understand the working of semaphores, consider two processes, P1 and P2, that need to access a shared resource. Initially, the semaphore S is set to 1, indicating that the resource is available.
State 1 − Both P1 and P2 are in their non-critical sections.
State 2 − P1 wants to enter its critical section, so it performs wait(S). So S is decremented to 0.
State 3 − P1 is in critical section, now P2 also wants to enter its critical section, so it performs wait(S). Since S is 0, P2 is blocked and waits.
State 4 − P1 finishes its critical section and performs signal(S). So S is incremented to 1.
State 5 − P2 is unblocked and performs wait(S). So S is decremented to 0, and P2 enters its critical section.
State 6 − P2 finishes its critical section and performs signal(S). So S is incremented to 1.
The following diagram illustrates the above steps −
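The same state sequence can be traced with a small two-thread sketch. The start delay and critical-section duration are illustrative assumptions chosen so that P2 arrives while P1 still holds the semaphore:

```python
import threading
import time

S = threading.Semaphore(1)   # State 1: resource available, S = 1
log = []                     # list.append is atomic in CPython

def process(name, start_delay):
    time.sleep(start_delay)
    S.acquire()              # wait(S): decrement S, or block while S == 0
    log.append(f"{name} enters critical section")
    time.sleep(0.3)          # inside the critical section
    log.append(f"{name} leaves critical section")
    S.release()              # signal(S): increment S, waking a blocked process

p1 = threading.Thread(target=process, args=("P1", 0))
p2 = threading.Thread(target=process, args=("P2", 0.05))
p1.start(); p2.start()
p1.join(); p2.join()
print("\n".join(log))
```

With these timings, P1 acquires first (state 2), P2 blocks (state 3), and P2 enters only after P1 signals (states 4–5), so the log shows P1's exit before P2's entry.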
Implementation of Semaphore
Below are examples of implementing a binary semaphore in Python, C++, and Java. The Python and Java versions use their standard libraries' semaphore classes; the C++ version uses POSIX semaphores (semaphore.h) with std::thread.
import threading
import time

semaphore = threading.Semaphore(1)

def process(id):
    print(f"Process {id} is trying to enter critical section")
    semaphore.acquire()  # wait operation
    print(f"Process {id} has entered critical section")
    time.sleep(1)  # Simulate critical section
    print(f"Process {id} is leaving critical section")
    semaphore.release()  # signal operation

threads = []
for i in range(5):
    t = threading.Thread(target=process, args=(i,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
The output of the above code will be −
Process 0 is trying to enter critical section
Process 0 has entered critical section
Process 1 is trying to enter critical section
Process 2 is trying to enter critical section
Process 3 is trying to enter critical section
Process 4 is trying to enter critical section
...
#include <iostream>
#include <semaphore.h>
#include <thread>
#include <chrono>
#include <vector>
using namespace std;

sem_t semaphore;

void process(int id) {
    cout << "Process " << id << " is trying to enter critical section" << endl;
    sem_wait(&semaphore); // wait operation
    cout << "Process " << id << " has entered critical section" << endl;
    this_thread::sleep_for(chrono::seconds(1)); // Simulate critical section
    cout << "Process " << id << " is leaving critical section" << endl;
    sem_post(&semaphore); // signal operation
}

int main() {
    sem_init(&semaphore, 0, 1); // initialize binary semaphore with count 1
    vector<thread> threads;
    for (int i = 0; i < 5; i++) {
        threads.push_back(thread(process, i));
    }
    for (auto& t : threads) {
        t.join();
    }
    sem_destroy(&semaphore);
    return 0;
}
The output of the above code will be −
Process 0 is trying to enter critical section
Process 0 has entered critical section
Process 1 is trying to enter critical section
Process 2 is trying to enter critical section
Process 3 is trying to enter critical section
Process 4 is trying to enter critical section
...
import java.util.concurrent.Semaphore;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SemaphoreExample {
    static Semaphore semaphore = new Semaphore(1);

    public static void process(int id) {
        System.out.println("Process " + id + " is trying to enter critical section");
        try {
            semaphore.acquire(); // wait operation
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return; // interrupted before acquiring, so there is nothing to release
        }
        try {
            System.out.println("Process " + id + " has entered critical section");
            Thread.sleep(1000); // Simulate critical section
            System.out.println("Process " + id + " is leaving critical section");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            semaphore.release(); // signal operation
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            executor.submit(() -> process(id));
        }
        executor.shutdown();
    }
}
The output of the above code will be −
Process 0 is trying to enter critical section
Process 0 has entered critical section
Process 1 is trying to enter critical section
Process 2 is trying to enter critical section
Process 3 is trying to enter critical section
Process 4 is trying to enter critical section
...
Advantages of Semaphores
Some of the advantages of semaphores are as follows −
- Semaphores strictly enforce mutual exclusion: a binary semaphore admits only one process into the critical section at a time. They are also more efficient than many other synchronization methods.
- Blocking semaphore implementations avoid the resource wastage of busy waiting, since processor time is not spent repeatedly checking whether a process may enter the critical section.
- Semaphores are implemented in the machine-independent code of the kernel, making them portable across machines.
- Processes blocked on a semaphore are placed in a waiting queue and consume no processor time; they are admitted to the critical section only once the semaphore's condition is satisfied.
- Semaphores allow flexible management of resources, including pools of multiple identical resource instances via counting semaphores.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
- Semaphores must be used carefully: the wait and signal operations have to be performed in the correct order, or deadlocks can occur.
- Semaphores are impractical for large-scale use because they lead to a loss of modularity: wait and signal calls become scattered throughout the program, obscuring its structure.
- Semaphores may cause priority inversion, where a low-priority process holding a semaphore delays a high-priority process that is waiting on it.
Conclusion
Semaphores are synchronization mechanisms used in operating systems to control access to shared resources by multiple processes. We discussed the two types of semaphores, binary and counting, explained how semaphores work with a step-by-step example, and provided code snippets in Python, C++, and Java to demonstrate their implementation.