Longest Remaining Time First (LRTF) CPU Scheduling Program
Longest Remaining Time First (LRTF) is a preemptive CPU scheduling algorithm that selects the process with the longest remaining burst time to execute next. Unlike Shortest Remaining Time First (SRTF), LRTF prioritizes the processes that will take the most time to complete, making it the preemptive counterpart of the Longest Job First (LJF) algorithm.
How LRTF Works
The scheduler keeps a ready queue and, at every time unit, selects the arrived process with the maximum remaining execution time. Whenever a new process arrives, its burst time is compared with the remaining time of the currently running process, and the CPU is preempted if the newcomer's remaining time is longer; ties are broken here in favor of the lower process ID.
Algorithm Steps
1. Sort the processes by Arrival Time (AT).
2. At each time unit, find the process with the largest remaining Burst Time (BT) among all arrived processes.
3. Execute the selected process for one time unit and decrement its remaining BT by 1.
4. When a process's BT reaches zero, record its Completion Time (CT).
5. Repeat until all processes are completed.
Example LRTF Scheduling
Consider four processes with the following arrival and burst times:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 1 | 2 |
| P2 | 2 | 4 |
| P3 | 3 | 6 |
| P4 | 4 | 8 |
Step-by-Step Execution
Remaining times are re-evaluated at every time unit, with ties broken in favor of the lower process ID. P4 runs alone from time 4 until its remaining time falls to P3's; after that the longer processes alternate, and every process finishes in a burst at the end.

| Time | Available Processes | Remaining Times | Selected |
|---|---|---|---|
| 1-2 | P1 | P1=2 | P1 |
| 2-3 | P1, P2 | P1=1, P2=4 | P2 |
| 3-4 | P1, P2, P3 | P1=1, P2=3, P3=6 | P3 |
| 4-7 | P1, P2, P3, P4 | P1=1, P2=3, P3=5, P4=8 | P4 |
| 7-8 | P1, P2, P3, P4 | P1=1, P2=3, P3=5, P4=5 | P3 |
| 8-9 | P1, P2, P3, P4 | P1=1, P2=3, P3=4, P4=5 | P4 |
| 9-10 | P1, P2, P3, P4 | P1=1, P2=3, P3=4, P4=4 | P3 |
| 10-11 | P1, P2, P3, P4 | P1=1, P2=3, P3=3, P4=4 | P4 |
| 11-12 | P1, P2, P3, P4 | P1=1, P2=3, P3=3, P4=3 | P2 |
| 12-13 | P1, P2, P3, P4 | P1=1, P2=2, P3=3, P4=3 | P3 |
| 13-14 | P1, P2, P3, P4 | P1=1, P2=2, P3=2, P4=3 | P4 |
| 14-15 | P1, P2, P3, P4 | P1=1, P2=2, P3=2, P4=2 | P2 |
| 15-16 | P1, P2, P3, P4 | P1=1, P2=1, P3=2, P4=2 | P3 |
| 16-17 | P1, P2, P3, P4 | P1=1, P2=1, P3=1, P4=2 | P4 |
| 17-18 | P1, P2, P3, P4 | P1=1, P2=1, P3=1, P4=1 | P1 (CT=18) |
| 18-19 | P2, P3, P4 | P2=1, P3=1, P4=1 | P2 (CT=19) |
| 19-20 | P3, P4 | P3=1, P4=1 | P3 (CT=20) |
| 20-21 | P4 | P4=1 | P4 (CT=21) |
Calculating Average Times
| Process | AT | BT | CT | TAT | WT |
|---|---|---|---|---|---|
| P1 | 1 | 2 | 18 | 17 | 15 |
| P2 | 2 | 4 | 19 | 17 | 13 |
| P3 | 3 | 6 | 20 | 17 | 11 |
| P4 | 4 | 8 | 21 | 17 | 9 |

Average Turnaround Time = (17 + 17 + 17 + 17) / 4 = 17.0 units

Average Waiting Time = (15 + 13 + 11 + 9) / 4 = 12.0 units

Every process ends up with the same turnaround time: LRTF keeps all remaining times nearly equal, so the processes complete one after another at the very end of the schedule.
Implementation
```cpp
#include <iostream>
#include <algorithm>
using namespace std;

struct Process {
    int pid, at, bt, bt_backup, ct, tat, wt;
};

bool compareAT(Process p1, Process p2) {
    return p1.at < p2.at;
}

// Return the index of the arrived, unfinished process with the largest
// remaining burst time; ties go to the lower index. -1 means none is ready.
int findLargestBT(Process p[], int n, int currentTime) {
    int maxIndex = -1, maxBT = -1;
    for (int i = 0; i < n; i++) {
        if (p[i].at <= currentTime && p[i].bt > 0 && p[i].bt > maxBT) {
            maxBT = p[i].bt;
            maxIndex = i;
        }
    }
    return maxIndex;
}

void calculateTimes(Process p[], int n) {
    sort(p, p + n, compareAT);
    int currentTime = p[0].at;
    int completed = 0;
    while (completed < n) {
        int index = findLargestBT(p, n, currentTime);
        if (index != -1) {
            p[index].bt--;              // run the selected process for one unit
            currentTime++;
            if (p[index].bt == 0) {     // process finished: record its times
                p[index].ct = currentTime;
                p[index].tat = p[index].ct - p[index].at;
                p[index].wt = p[index].tat - p[index].bt_backup;
                completed++;
            }
        } else {
            currentTime++;              // no process has arrived: CPU idles
        }
    }
}

int main() {
    Process p[] = {{1, 1, 2}, {2, 2, 4}, {3, 3, 6}, {4, 4, 8}};
    int n = 4;
    for (int i = 0; i < n; i++) {
        p[i].bt_backup = p[i].bt;       // bt is consumed during the simulation
    }
    calculateTimes(p, n);
    cout << "Process\tAT\tBT\tCT\tTAT\tWT\n";
    float totalTAT = 0, totalWT = 0;
    for (int i = 0; i < n; i++) {
        cout << "P" << p[i].pid << "\t" << p[i].at << "\t" << p[i].bt_backup
             << "\t" << p[i].ct << "\t" << p[i].tat << "\t" << p[i].wt << endl;
        totalTAT += p[i].tat;
        totalWT += p[i].wt;
    }
    cout << "\nAverage TAT: " << totalTAT / n << endl;
    cout << "Average WT: " << totalWT / n << endl;
    return 0;
}
```
Output:

```
Process AT      BT      CT      TAT     WT
P1      1       2       18      17      15
P2      2       4       19      17      13
P3      3       6       20      17      11
P4      4       8       21      17      9

Average TAT: 17
Average WT: 12
```
Characteristics
| Aspect | LRTF |
|---|---|
| Preemption | Yes |
| Starvation | Short processes may starve |
| Overhead | High due to frequent context switching |
| Convoy Effect | Pronounced; short processes wait behind long ones |
Conclusion
LRTF scheduling prioritizes processes with longer remaining execution times, which can lead to poor performance for shorter processes. While it ensures longer jobs get CPU time, it typically results in high average waiting times and is not practical for real-world systems where responsiveness is important.
