Operating System - Dekker's Algorithm for Process Synchronization
Dekker's algorithm is the first known solution to the critical section problem and to process synchronization for two processes. It was proposed by the Dutch mathematician Th. J. Dekker and first published in 1965. Any solution to the critical section problem must guarantee three conditions: mutual exclusion, progress, and bounded waiting. The algorithm evolved through several versions; the fifth and final version satisfies all three conditions and is the most efficient of them.
- First version of Dekker's Algorithm
- Second version of Dekker's Algorithm
- Third version of Dekker's Algorithm
- Fourth version of Dekker's Algorithm
- Fifth version of Dekker's Algorithm
First version of Dekker's Algorithm
In the first version of Dekker's algorithm, a single shared variable, thread_no, indicates which thread is allowed to enter its critical section. If thread_no is 1, thread 1 may enter its critical section; if thread_no is 2, thread 2 may enter its critical section.
Example
The code below shows the implementation of the first version of Dekker's algorithm −
#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<int> thread_no{1};
const int ITERATIONS = 10;

void thread1() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // entry section: wait until thread_no == 1
        while (thread_no.load() != 1) {
            std::this_thread::yield();
        }
        // critical section
        std::cout << "Thread 1 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 1 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section: give access to the other thread
        thread_no.store(2);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

void thread2() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // entry section: wait until thread_no == 2
        while (thread_no.load() != 2) {
            std::this_thread::yield();
        }
        // critical section
        std::cout << "Thread 2 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 2 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section: give access to the other thread
        thread_no.store(1);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    std::cout << "Both threads completed." << std::endl;
    return 0;
}
The output of the above code will be −
Thread 1 entering critical section (iteration 0)
Thread 1 leaving critical section (iteration 0)
Thread 2 entering critical section (iteration 0)
Thread 2 leaving critical section (iteration 0)
Thread 1 entering critical section (iteration 1)
....
Both threads completed.
Drawbacks of First Version
The problem with this first version is that it enforces lockstep synchronization: the two threads must enter their critical sections in strict alternation, so each depends on the other to make progress. If one thread finishes all of its work while the other still has more to do, the remaining thread eventually passes the turn to a thread that will never run again, and then waits in its entry loop forever. In other words, the progress condition is violated.
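To make this concrete, here is a small sketch of the failure (not part of the original example; the unequal iteration counts, the t1_done flag, and the "rescue" store in main are assumptions added purely for demonstration). Thread 2 runs its critical section only once while thread 1 wants to run three times, so thread 1's third entry loop would spin forever if main did not intervene −

#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<int> thread_no{1};

// Thread 2 stand-in: runs its critical section exactly once, then exits.
void short_thread() {
    while (thread_no.load() != 2) {
        std::this_thread::yield();
    }
    std::cout << "Thread 2 ran its only critical section" << std::endl;
    thread_no.store(1);   // hands the turn back one last time, then disappears
}

// Thread 1 stand-in: wants to run three times, but its partner is gone after
// the first hand-off, so the third entry loop never sees thread_no == 1 again.
void long_thread() {
    for (int i = 0; i < 3; ++i) {
        while (thread_no.load() != 1) {
            std::this_thread::yield();
        }
        std::cout << "Thread 1 critical section " << i << std::endl;
        thread_no.store(2);   // after i == 1 this gives the turn to a thread that has exited
    }
    std::cout << "Thread 1 finished" << std::endl;
}

int main() {
    std::atomic<bool> t1_done{false};
    std::thread t1([&] { long_thread(); t1_done.store(true); });
    std::thread t2(short_thread);

    t2.join();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    if (!t1_done.load()) {
        std::cout << "Thread 1 is stuck waiting for a turn that will never come" << std::endl;
    }
    thread_no.store(1);   // give the turn back by hand so the demo can terminate
    t1.join();
    return 0;
}

Without the final store in main, the program would never terminate; that line exists only so the demonstration ends cleanly.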
Second Version of Dekker's Algorithm
In the second version of Dekker's algorithm, lockstep synchronization is removed. Each thread has its own flag indicating whether it is currently inside its critical section; the flag is set in the entry section and cleared in the exit section.
Example
The code below shows the implementation of the second version of Dekker's algorithm −
#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<bool> th1{false};
std::atomic<bool> th2{false};
const int ITERATIONS = 10;

void thread1() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // entry section: wait until th2 is not in its critical section
        while (th2.load()) {
            std::this_thread::yield();
        }
        // indicate thread1 entering its critical section
        th1.store(true);
        // critical section
        std::cout << "Thread 1 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 1 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section: indicate th1 exiting its critical section
        th1.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

void thread2() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // entry section: wait until th1 is not in its critical section
        while (th1.load()) {
            std::this_thread::yield();
        }
        // indicate thread2 entering its critical section
        th2.store(true);
        // critical section
        std::cout << "Thread 2 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 2 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section: indicate th2 exiting its critical section
        th2.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    std::cout << "Both threads completed." << std::endl;
    return 0;
}
The output of the above code will be −
Thread 1 entering critical section (iteration 0)
Thread 1 leaving critical section (iteration 0)
Thread 2 entering critical section (iteration 0)
Thread 2 leaving critical section (iteration 0)
Thread 1 entering critical section (iteration 1)
....
Both threads completed.
Drawbacks of Second Version
Mutual exclusion is violated in this version. Each thread checks the other thread's flag before setting its own, so if a thread is preempted between the check and the flag update, both threads can observe both flags as false and enter their critical sections at the same time. The same interleaving is possible right at the start, when both flags are still false.
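The race can actually be observed. The sketch below is my own instrumentation, not part of the tutorial code: the atomic inside counter, the violations counter, the tight 100,000-iteration loop, and the removal of the sleeps are all assumptions added only to make the race likely and visible. It keeps the second version's check-then-set protocol and counts how often both threads are inside the critical section at once −

#include <iostream>
#include <thread>
#include <atomic>

std::atomic<bool> th1{false};
std::atomic<bool> th2{false};
std::atomic<int>  inside{0};       // how many threads are currently "inside"
std::atomic<int>  violations{0};   // times a thread found someone else already inside

void worker(std::atomic<bool>& mine, std::atomic<bool>& other) {
    for (int i = 0; i < 100000; ++i) {
        // second-version protocol: check the other flag first ...
        while (other.load()) { }
        // ... then set our own; the race lives in the gap between these two lines
        mine.store(true);
        if (inside.fetch_add(1) + 1 > 1) {
            violations.fetch_add(1);   // mutual exclusion was just violated
        }
        inside.fetch_sub(1);
        mine.store(false);
    }
}

int main() {
    std::thread a(worker, std::ref(th1), std::ref(th2));
    std::thread b(worker, std::ref(th2), std::ref(th1));
    a.join();
    b.join();
    std::cout << "mutual exclusion violations observed: " << violations.load() << std::endl;
    return 0;
}

The exact count is timing-dependent and may even be zero on a lightly loaded machine, but with this many iterations most runs report at least a few violations.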
Third Version of Dekker's Algorithm
In this version, each thread sets its own flag before testing the other thread's flag, which restores mutual exclusion. However, this ordering introduces the possibility of deadlock.
Example
The code below shows the implementation of the third version of Dekker's algorithm −
#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<bool> th1{false};
std::atomic<bool> th2{false};
const int ITERATIONS = 10;

// Third version: set our own flag before checking the other thread's flag.
// This guarantees mutual exclusion, but it may lead to deadlock
// if both threads set their flags at (nearly) the same time.
void thread1() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter critical section
        th1.store(true);
        // wait while thread 2 wants to enter (possible deadlock if both set)
        while (th2.load()) {
            std::this_thread::yield();
        }
        // critical section
        std::cout << "Thread 1 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 1 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section
        th1.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

void thread2() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter critical section
        th2.store(true);
        // wait while thread 1 wants to enter (possible deadlock if both set)
        while (th1.load()) {
            std::this_thread::yield();
        }
        // critical section
        std::cout << "Thread 2 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 2 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section
        th2.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    std::cout << "Both threads completed." << std::endl;
    return 0;
}
The output of the above code will be −
Thread 1 entering critical section (iteration 0)
Thread 1 leaving critical section (iteration 0)
Thread 2 entering critical section (iteration 0)
Thread 2 leaving critical section (iteration 0)
Thread 1 entering critical section (iteration 1)
....
Both threads completed.
Drawbacks of Third Version
This version does guarantee mutual exclusion, but it introduces the possibility of deadlock: if both threads set their flags at roughly the same time, each waits for the other to clear its flag, and neither ever proceeds.
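The deadlock can be reproduced with the sketch below, which is my own demo rather than part of the tutorial: the 10-millisecond pause between publishing intent and testing the other flag, the done flags, and the watchdog in main are all assumptions added only to make the stall easy to hit and to let the program terminate −

#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<bool> th1{false};
std::atomic<bool> th2{false};
std::atomic<bool> done1{false};
std::atomic<bool> done2{false};

void worker(std::atomic<bool>& mine, std::atomic<bool>& other,
            std::atomic<bool>& done, const char* name) {
    // third-version protocol: announce intent first ...
    mine.store(true);
    // widen the race window so both threads usually announce before either checks
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    // ... then wait for the other thread; if both announced, both wait here forever
    while (other.load()) {
        std::this_thread::yield();
    }
    std::cout << name << " entered its critical section" << std::endl;
    mine.store(false);
    done.store(true);
}

int main() {
    std::thread a(worker, std::ref(th1), std::ref(th2), std::ref(done1), "Thread 1");
    std::thread b(worker, std::ref(th2), std::ref(th1), std::ref(done2), "Thread 2");

    std::this_thread::sleep_for(std::chrono::seconds(1));
    if (!done1.load() && !done2.load()) {
        std::cout << "Deadlock: both flags are set and both threads are waiting" << std::endl;
        th2.store(false);   // break the tie by hand so the demo can finish
    }
    a.join();
    b.join();
    return 0;
}

On most runs the watchdog fires and reports the deadlock; clearing one flag from main is only a convenience so the demonstration ends instead of hanging.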
Fourth Version of Dekker's Algorithm
In the fourth version of Dekker's algorithm, a thread that finds the other thread's flag set temporarily resets its own flag to false for a short, random period before trying again. This preserves mutual exclusion and breaks the deadlock of the third version.
Example
The code below shows the implementation of the fourth version of Dekker's algorithm −
#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>
#include <random>

std::atomic<bool> th1{false};
std::atomic<bool> th2{false};
const int ITERATIONS = 10;

// random short pause used as a back-off to resolve the tie and avoid deadlock;
// each thread keeps its own engine, since std::mt19937 is not safe to share between threads
inline void pause_random() {
    static thread_local std::mt19937 rng{std::random_device{}()};
    static thread_local std::uniform_int_distribution<int> dist(1, 50);
    std::this_thread::sleep_for(std::chrono::milliseconds(dist(rng)));
}

void thread1() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter
        th1.store(true);
        // if other wants to enter, briefly withdraw and retry (randomized backoff)
        while (th2.load()) {
            th1.store(false);
            pause_random();   // small random wait
            th1.store(true);  // retry
        }
        // critical section
        std::cout << "Thread 1 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 1 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section
        th1.store(false);
        // remainder
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

void thread2() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter
        th2.store(true);
        // if other wants to enter, briefly withdraw and retry (randomized backoff)
        while (th1.load()) {
            th2.store(false);
            pause_random();   // small random wait
            th2.store(true);  // retry
        }
        // critical section
        std::cout << "Thread 2 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 2 leaving critical section (iteration " << i << ")" << std::endl;
        // exit section
        th2.store(false);
        // remainder
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    std::cout << "Both threads completed." << std::endl;
    return 0;
}
The output of the above code will be −
Thread 1 entering critical section (iteration 0)
Thread 1 leaving critical section (iteration 0)
Thread 2 entering critical section (iteration 0)
Thread 2 leaving critical section (iteration 0)
Thread 1 entering critical section (iteration 1)
....
Both threads completed.
Drawbacks of Fourth Version
The problem with this version is indefinite postponement: because the back-off times are random, an unlucky sequence of retries can keep a thread out of its critical section for an arbitrarily long time. This unpredictability makes the approach unacceptable for business-critical or time-critical systems.
Fifth Version of Dekker's Algorithm
In this version, a favoured-thread variable is used to decide which thread may enter its critical section when both want to. By resolving the conflict over which thread should proceed first, it provides mutual exclusion while avoiding deadlock, indefinite postponement, and lockstep synchronization. This version of Dekker's algorithm is the complete solution to the critical section problem for two processes.
Example
The code below shows the implementation of the fifth version of Dekker's algorithm −
#include <iostream>
#include <thread>
#include <atomic>
#include <chrono>

std::atomic<bool> want1{false};
std::atomic<bool> want2{false};
std::atomic<int> favoured{1};
const int ITERATIONS = 10;

void thread1() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter
        want1.store(true);
        // entry section: wait while thread 2 wants to enter
        while (want2.load()) {
            if (favoured.load() == 2) {
                // yield to thread 2 and wait until favoured changes
                want1.store(false);
                while (favoured.load() == 2) {
                    std::this_thread::yield();
                }
                want1.store(true);
            }
        }
        // critical section
        std::cout << "Thread 1 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 1 leaving critical section (iteration " << i << ")" << std::endl;
        // favour the other thread and exit
        favoured.store(2);
        want1.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

void thread2() {
    for (int i = 0; i < ITERATIONS; ++i) {
        // indicate intent to enter
        want2.store(true);
        // entry section: wait while thread 1 wants to enter
        while (want1.load()) {
            if (favoured.load() == 1) {
                // yield to thread 1 and wait until favoured changes
                want2.store(false);
                while (favoured.load() == 1) {
                    std::this_thread::yield();
                }
                want2.store(true);
            }
        }
        // critical section
        std::cout << "Thread 2 entering critical section (iteration " << i << ")" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "Thread 2 leaving critical section (iteration " << i << ")" << std::endl;
        // favour the other thread and exit
        favoured.store(1);
        want2.store(false);
        // remainder section
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    std::cout << "Both threads completed." << std::endl;
    return 0;
}
The output of the above code will be −
Thread 1 entering critical section (iteration 0)
Thread 1 leaving critical section (iteration 0)
Thread 2 entering critical section (iteration 0)
Thread 2 leaving critical section (iteration 0)
Thread 1 entering critical section (iteration 1)
....
Both threads completed.
Advantages of Fifth Version
- Mutual Exclusion − Only one thread can be in its critical section at a time.
- Progress − If no thread is in its critical section, and one or more threads want to enter their critical sections, then the selection of the thread that will enter the critical section next cannot be postponed indefinitely.
- Bounded Waiting − There exists a limit on the number of times that other threads are allowed to enter their critical sections after a thread has made a request to enter its critical section and before that request is granted.
Conclusion
Dekker's algorithm is a classic solution to the critical section problem for two processes. The fifth version of Dekker's algorithm successfully satisfies all three conditions of the critical section problem: mutual exclusion, progress, and bounded waiting. It fixes the conflicts that arise in the previous versions.