What is the difference between Concurrency and Parallel Execution in Computer Architecture?
Concurrent Execution
Concurrent execution is the behavior of the N-client 1-server model, in which only one client is served at any given moment. This model has a dual nature: it is sequential on a small time scale, but appears simultaneous on a larger time scale.
In this model, the fundamental problem is how the competing clients (processes or threads) should be scheduled for service (execution) by the single server (processor). Scheduling policies can be oriented toward efficient service in terms of highest throughput (least overhead), toward short average response time, and so on.
The scheduling policy can be considered as covering two aspects. The first deals with whether servicing a client may be interrupted and, if so, on what occasions (the pre-emption rule). The second states how one of the competing clients is selected for service (the selection rule), as shown in the figure.
If pre-emption is not allowed, a client is serviced for as long as it requires. This can result in long waiting times, or in the blocking of important service requests from other clients. The pre-emption rule can specify time-sharing, which restricts continuous service for each client to the duration of a time slice, or it can be priority-based, interrupting the service of a client whenever a higher-priority client requests service.
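The time-sharing variant of the pre-emption rule can be illustrated with a small simulation. The sketch below (client names and service demands are made up for the example) runs a round-robin scheduler: each client runs for at most one time slice before being pre-empted and re-queued.

```python
from collections import deque

def round_robin(clients, time_slice):
    """Simulate time-sliced scheduling on a single server: each client
    runs for at most `time_slice` units, then is pre-empted and re-queued
    until its total service demand is met."""
    queue = deque(clients.items())   # (name, remaining service time)
    order = []                       # record of (client, units served)
    while queue:
        name, remaining = queue.popleft()
        served = min(time_slice, remaining)
        order.append((name, served))
        if remaining > served:       # not finished: pre-empt and re-queue
            queue.append((name, remaining - served))
    return order

# Three competing clients with different total service demands.
schedule = round_robin({"A": 3, "B": 5, "C": 2}, time_slice=2)
```

With a time slice of 2, client A is pre-empted once and client B twice, so no single long-running client can monopolize the server.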
The selection rule depends on specific parameters, such as priority, time of arrival, and so on. This rule defines an algorithm that computes a numeric value, which we will call the rank, from the given parameters; the client with the best rank is selected for service.
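A selection rule of this kind can be sketched as follows. The rank function below is a hypothetical example that prefers higher priority and breaks ties by earlier arrival time; the client records are illustrative.

```python
def rank(client):
    """Compute a numeric rank from the client's parameters.
    Lower rank is better: higher priority first, then earlier arrival."""
    return (-client["priority"], client["arrival"])

# Competing clients described by the parameters the rule uses.
clients = [
    {"name": "editor",   "priority": 1, "arrival": 0},
    {"name": "backup",   "priority": 1, "arrival": 5},
    {"name": "keyboard", "priority": 3, "arrival": 2},
]

# The selection rule picks the competing client with the best rank.
chosen = min(clients, key=rank)
```

Here the keyboard handler wins despite arriving later, because its priority dominates the rank; between equal priorities, the earlier arrival would win.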
Parallel Execution
Parallel execution is associated with the N-client N-server model. Having more than one server allows more than one client (process or thread) to be serviced at the same time; this mode of operation is known as parallel execution.
The aim of parallel execution is to speed up processing and raise throughput, that is, the amount of processing that can be accomplished during a given interval of time. The amount of hardware increases with parallel execution, and with it the cost of the system.
Parallel execution is achieved by distributing the work among several functional units. For example, the arithmetic, logic, and shift operations can be separated into three units, and the operands routed to each unit under the supervision of a control unit.
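This idea can be sketched in Python by treating each functional unit as a separate worker and the thread pool as a stand-in for the control unit that routes operands; the operations and operand values are illustrative, not taken from any real processor.

```python
from concurrent.futures import ThreadPoolExecutor

# Three "functional units", each handling one class of operation.
def arithmetic_unit(a, b):
    return a + b          # add

def logic_unit(a, b):
    return a & b          # bitwise AND

def shift_unit(a, n):
    return a << n         # left shift

# The "control unit" routes each operand pair to the proper unit;
# the three operations can then proceed at the same time.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [
        pool.submit(arithmetic_unit, 6, 3),
        pool.submit(logic_unit, 6, 3),
        pool.submit(shift_unit, 6, 1),
    ]
    values = [f.result() for f in futures]
```

With one unit per operation class, independent operations no longer queue behind each other, which is the throughput gain the N-server model buys at the cost of extra hardware.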