Types of Parallelism in Processing Execution
Parallelism in processing execution refers to the simultaneous execution of multiple tasks or operations to improve computational performance. There are four main types of parallelism, each operating at different levels of the computing system.
Data Parallelism
Data Parallelism involves concurrent execution of the same task on multiple computing cores, but with different portions of data. Each core performs identical operations on separate data segments.
Consider summing an array of size N. On a single-core system, one thread sums elements [0] ... [N-1]. On a dual-core system, thread A on core 0 sums elements [0] ... [N/2-1], while thread B on core 1 sums elements [N/2] ... [N-1]. Both threads execute the same summing operation but on different data subsets.
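The dual-core example above can be sketched with POSIX threads. This is a minimal illustration, not the article's own code; the array contents and the `parallel_sum` name are assumptions made for the example.

```c
#include <pthread.h>
#include <stddef.h>

#define N 8  /* hypothetical array size */

static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { size_t lo, hi; long sum; };

/* Both threads run this same function -- the hallmark of data
   parallelism -- but each on its own half of the array. */
static void *sum_range(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (size_t i = r->lo; i < r->hi; i++)
        r->sum += data[i];
    return NULL;
}

long parallel_sum(void) {
    struct range a = {0, N / 2, 0};   /* thread A: elements [0] ... [N/2-1] */
    struct range b = {N / 2, N, 0};   /* thread B: elements [N/2] ... [N-1] */
    pthread_t ta, tb;
    pthread_create(&ta, NULL, sum_range, &a);
    pthread_create(&tb, NULL, sum_range, &b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return a.sum + b.sum;             /* combine the partial sums */
}
```

Note that the final combination step is sequential; only the per-half summing runs in parallel.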
Task Parallelism
Task Parallelism involves concurrent execution of different tasks on multiple computing cores. Each core performs a unique operation, often on the same data set.
Using the same array example, task parallelism might involve two threads performing different statistical operations: thread A calculates the mean while thread B calculates the standard deviation. Both threads operate on the same array but execute completely different tasks simultaneously.
Bit-Level Parallelism
Bit-level parallelism increases performance by expanding the processor word size, reducing the number of instructions needed for operations on larger data types.
Consider adding two 16-bit integers on different processors:
| Processor | Word Size | Instructions Required | Process |
|---|---|---|---|
| 8-bit | 8 bits | 2 | Add lower 8 bits, then upper 8 bits |
| 16-bit | 16 bits | 1 | Add entire 16-bit integers directly |
The 16-bit processor completes the addition in a single instruction rather than two, halving the instruction count by processing more bits simultaneously.
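The two-instruction path in the table can be simulated in software. This sketch mimics what an 8-bit ALU must do: add the lower bytes, capture the carry, then add the upper bytes plus the carry (the function name is an assumption for illustration).

```c
#include <stdint.h>

/* Simulates adding two 16-bit integers on an 8-bit processor:
   two byte-wide additions linked by a carry bit. */
uint16_t add16_on_8bit(uint16_t a, uint16_t b) {
    uint8_t lo = (uint8_t)(a & 0xFF) + (uint8_t)(b & 0xFF); /* step 1: lower bytes */
    uint8_t carry = lo < (uint8_t)(a & 0xFF);               /* wraparound => carry out */
    uint8_t hi = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry; /* step 2: upper bytes + carry */
    return ((uint16_t)hi << 8) | lo;
}
```

A 16-bit (or wider) processor performs the same computation with one native add instruction, which is exactly the gain bit-level parallelism provides.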
Instruction-Level Parallelism
Instruction-level parallelism (ILP) enables simultaneous execution of multiple instructions from a program. Modern processors use techniques like pipelining and superscalar execution to achieve ILP.
Example
```c
for (i = 1; i <= 100; i = i + 1)
    y[i] = y[i] + x[i];
```
Every iteration of this loop can overlap with the others, since each iteration operates on independent array elements. Within a single iteration there is little overlap opportunity, but multiple iterations can execute simultaneously on different functional units.
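One way hardware (or a compiler) exploits this independence is loop unrolling, which places several independent additions next to each other so a superscalar core can issue them in the same cycle. A sketch, with a hypothetical `vec_add` wrapper around the loop above:

```c
/* Unrolled version of y[i] = y[i] + x[i]: the four additions in the
   body have no data dependence on one another, so a superscalar
   processor may execute them in parallel. */
void vec_add(double *y, const double *x, int n) {
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        y[i]     += x[i];
        y[i + 1] += x[i + 1];
        y[i + 2] += x[i + 2];
        y[i + 3] += x[i + 3];
    }
    for (; i < n; i++)   /* remainder iterations */
        y[i] += x[i];
}
```

The unrolled source computes exactly the same result as the original loop; only the opportunity for parallel issue changes.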
Comparison
| Type | Level | Execution | Best For |
|---|---|---|---|
| Data Parallelism | Application | Same task, different data | Large datasets, matrix operations |
| Task Parallelism | Application | Different tasks, same/different data | Independent operations, pipelines |
| Bit-Level | Hardware | Wider data paths | Arithmetic operations |
| Instruction-Level | Hardware | Multiple instructions simultaneously | Sequential programs with independent instructions |
Conclusion
These four types of parallelism operate at different system levels to improve computational performance. Data and task parallelism are typically exploited at the application level, while bit-level and instruction-level parallelism are implemented in hardware. Understanding these concepts helps in designing efficient parallel algorithms and selecting appropriate hardware architectures.
