
Buffering in Operating System
What is Buffering in OS?
In operating systems, buffering is a technique used to enhance the performance of I/O operations. It works by temporarily storing data in a buffer (or cache), from which the data can be accessed more quickly than from its original source.
In a computer system, data is stored on devices such as hard disks, magnetic tapes, optical discs and network devices. When a process needs to read or write data on one of these storage devices, it has to wait while the device retrieves or stores the data. This waiting time can be very high, especially for devices that are slow or have high latency.
Buffering addresses this problem by providing a temporary storage area, called a buffer, which holds data before it is sent to or after it is retrieved from the storage device. When the buffer is full, its data is sent to the storage device in a single batch, which reduces the number of access operations required and hence improves the performance of the system.
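Below is a minimal C sketch of this idea, assuming a hypothetical slow output device reached through a file descriptor dev_fd. Bytes accumulate in memory, and the device is accessed only when the buffer fills, so one write() call covers many bytes.

```c
/* Minimal flush-on-full output buffer (a sketch, not production code). */
#include <stddef.h>
#include <unistd.h>

#define BUF_SIZE 4096

static char buf[BUF_SIZE];
static size_t used = 0;

/* Send everything accumulated so far to the device in one batch. */
static void flush_buffer(int dev_fd)
{
    if (used > 0) {
        (void)write(dev_fd, buf, used); /* one device access for many bytes */
        used = 0;
    }
}

/* Store one byte in the buffer; touch the device only when it is full. */
static void buffered_put(int dev_fd, char c)
{
    if (used == BUF_SIZE)
        flush_buffer(dev_fd);
    buf[used++] = c;
}
```

Without the buffer, every byte would cost one write() call; with it, roughly one call per 4096 bytes suffices, which is exactly the reduction in access operations described above.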
Reasons for Buffering
The following are the three major reasons for buffering in operating systems −
Buffering synchronizes two devices that have different processing speeds. For example, if a fast hard disk (the producer of data) feeds a slow printer (the consumer of data), buffering is required so that neither device has to wait on the other's pace.
Buffering is also required in cases where two devices have different data block sizes.
Buffering is also required to support copy semantics for application I/O operations, as illustrated in the sketch after this list.
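Copy semantics means that the version of the data written to the device is the version that existed at the moment the write request was issued, even if the application modifies its buffer afterwards. The user-space sketch below illustrates how copying into a separate buffer at call time achieves this; kernel_write() and the pending queue are hypothetical stand-ins for the kernel's internals, not a real API.

```c
/* User-space illustration of copy semantics (hypothetical kernel_write()). */
#include <stdlib.h>
#include <string.h>

static char *pending;        /* data queued for the slow device */
static size_t pending_len;

void kernel_write(const char *user_buf, size_t len)
{
    /* Snapshot the caller's data immediately. After this copy, the
     * application may reuse or overwrite user_buf without changing
     * what will eventually reach the device. */
    pending = malloc(len);                 /* error handling omitted */
    memcpy(pending, user_buf, len);
    pending_len = len;
    /* ... the device is serviced later from 'pending' ... */
}
```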
Types of Buffering
Operating systems commonly use the following three buffering techniques −
Single Buffering
Double Buffering
Circular Buffering
Let us discuss each type of buffering technique in detail.
Single Buffering
Single buffering is the simplest form of buffering that an operating system can support. When a process issues an I/O request, the operating system assigns a single buffer (or cache) in the system portion of main memory to the operation. Input transfers are made into this buffer, and the data is then moved to user space when needed.
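The loop below sketches single buffering in C. device_read() and deliver_to_user() are hypothetical placeholders for the device driver and the copy into the process's address space; the point is that there is exactly one system buffer, so filling it and draining it cannot overlap.

```c
/* Single buffering: one system buffer, filled and drained in turn. */
#include <stddef.h>

#define BUF_SIZE 512

/* Hypothetical helpers: block until the device delivers data, and
 * copy data from the system buffer into the process's address space. */
extern size_t device_read(char *dst, size_t n);
extern void deliver_to_user(const char *src, size_t n);

void single_buffered_input(void)
{
    char buffer[BUF_SIZE];    /* the one system buffer */
    size_t n;

    while ((n = device_read(buffer, BUF_SIZE)) > 0) {
        /* The device sits idle during this copy: with one buffer,
         * the next transfer cannot start until the copy finishes. */
        deliver_to_user(buffer, n);
    }
}
```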
Double Buffering
Double buffering is an extension of single buffering. In this technique, a process can transfer data to or from one buffer while the operating system empties or fills the other. Double buffering therefore uses two system buffers instead of one.
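Reusing the same hypothetical helpers, the sketch below shows the buffer-swapping structure of double buffering. The code runs sequentially for clarity; in a real operating system the device_read() into one buffer would proceed concurrently (for example via DMA) while the other buffer is delivered to the process.

```c
/* Double buffering: two system buffers that swap roles each round. */
#include <stddef.h>

#define BUF_SIZE 512

extern size_t device_read(char *dst, size_t n);         /* hypothetical */
extern void deliver_to_user(const char *src, size_t n); /* hypothetical */

void double_buffered_input(void)
{
    char buffers[2][BUF_SIZE];
    int filling = 0;                  /* index of the buffer being filled */
    size_t n = device_read(buffers[filling], BUF_SIZE);

    while (n > 0) {
        int draining = filling;       /* the full buffer, ready to drain */
        filling = 1 - filling;        /* swap roles of the two buffers   */
        /* In a real OS this read would overlap with the delivery below. */
        size_t next = device_read(buffers[filling], BUF_SIZE);
        deliver_to_user(buffers[draining], n);
        n = next;
    }
}
```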
Circular Buffering
When more than two buffers are used, the technique is called circular buffering. It addresses a limitation of double buffering: two buffers can be insufficient when a process performs rapid bursts of I/O. In a circular buffer, each individual buffer acts as one unit, and the buffers are filled and drained in a wrap-around fashion.
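A circular buffer is usually implemented as a ring of fixed-size slots with wrap-around head and tail indices. The following is a minimal single-threaded sketch (the slot count and size are arbitrary); a real implementation would also need locking or atomic operations, since the producer and consumer normally run concurrently.

```c
/* Circular buffering: a ring of more than two fixed-size slots. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SLOTS 8          /* more than two buffers */
#define SLOT_SIZE 512

struct ring {
    char data[SLOTS][SLOT_SIZE];
    size_t head;   /* next slot the producer fills  */
    size_t tail;   /* next slot the consumer drains */
    size_t count;  /* slots currently in use        */
};

/* Producer side (e.g. a device driver): claim the next free slot. */
bool ring_put(struct ring *r, const char *src)
{
    if (r->count == SLOTS)
        return false;                   /* an I/O burst filled every slot */
    memcpy(r->data[r->head], src, SLOT_SIZE);
    r->head = (r->head + 1) % SLOTS;    /* wrap around to slot 0 */
    r->count++;
    return true;
}

/* Consumer side (e.g. the application): drain the oldest filled slot. */
bool ring_get(struct ring *r, char *dst)
{
    if (r->count == 0)
        return false;                   /* nothing buffered yet */
    memcpy(dst, r->data[r->tail], SLOT_SIZE);
    r->tail = (r->tail + 1) % SLOTS;
    r->count--;
    return true;
}
```

When ring_put() fails because every slot is full, the producer must wait; this is the back-pressure that a rapid burst of I/O creates when even multiple buffers are exhausted.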
Advantages of Buffering
Some of the important advantages of buffering are listed below −
Buffering reduces the number of I/O operations required to access data.
Buffering reduces the time that processes have to wait for data.
Buffering improves the performance of I/O operations as it allows data to be read or written in large blocks instead of 1 byte or 1 character at a time.
Buffering can improve the overall performance of the system by reducing the number of system calls and context switches required for I/O operations.
Limitations of Buffering
Buffering also has some limitations −
Large buffers consume a significant amount of memory, which can degrade system performance.
Buffering may cause a delay between the time data is read or written and the time it is processed by the application.
Buffering may also degrade real-time system performance and can cause synchronization issues.
Conclusion
To conclude, buffering in operating systems is a technique that reduces I/O overheads and inefficiencies by temporarily staging data in memory, at the cost of extra memory consumption and some added latency.
