
Data parallelism vs Task parallelism
Data Parallelism
Data parallelism means running the same task concurrently on multiple computing cores, with each core operating on a different subset of the data.
Take, for example, summing the contents of an array of size N. On a single-core system, one thread simply sums the elements [0] . . . [N − 1]. On a dual-core system, however, thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1], while thread B, running on core 1, sums the elements [N/2] . . . [N − 1]. The two threads then run in parallel on separate computing cores.
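The split described above can be sketched in Python. This is only an illustration of the decomposition pattern (the function and variable names are my own); note that on CPython the GIL prevents two threads from executing CPU-bound code on separate cores at the same time, so a real data-parallel implementation would use processes (e.g. `multiprocessing`) instead of threads.

```python
import threading

def parallel_sum(data):
    """Sum `data` by splitting it into two halves, one thread per half."""
    n = len(data)
    partials = [0, 0]  # one slot per thread's partial result

    def worker(idx, lo, hi):
        # Each thread performs the SAME operation (summing) on its own slice.
        partials[idx] = sum(data[lo:hi])

    a = threading.Thread(target=worker, args=(0, 0, n // 2))      # elements [0] .. [N/2 - 1]
    b = threading.Thread(target=worker, args=(1, n // 2, n))      # elements [N/2] .. [N - 1]
    a.start(); b.start()
    a.join(); b.join()
    return partials[0] + partials[1]

print(parallel_sum(list(range(10))))  # 45
```

The key point is that both threads execute identical code; only the data subset differs.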
Task Parallelism
Task parallelism means running different tasks concurrently on multiple computing cores.
Returning to the array example, task parallelism might involve two threads, each performing a unique statistical operation on the same array of elements. The threads again operate in parallel on separate computing cores, but each performs a different operation.
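A minimal sketch of that idea, again with illustrative names of my own choosing: two threads run *different* operations (here, a sum and a maximum) over the same array. The same GIL caveat as before applies to CPython threads for CPU-bound work; the structure, not the raw speedup, is what the example shows.

```python
import threading

def task_parallel_stats(data):
    """Run two different operations on the same data, one per thread."""
    results = {}

    def compute_sum():
        # Task 1: sum of the elements.
        results["sum"] = sum(data)

    def compute_max():
        # Task 2: a different operation on the same data.
        results["max"] = max(data)

    t1 = threading.Thread(target=compute_sum)
    t2 = threading.Thread(target=compute_max)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results["sum"], results["max"]

print(task_parallel_stats([3, 1, 4, 1, 5]))  # (14, 5)
```

Unlike the data-parallel version, the two threads here execute different code paths, which is exactly what distinguishes task parallelism.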
The key differences between data parallelism and task parallelism are −
| Data Parallelism | Task Parallelism |
| --- | --- |
| The same task is performed on different subsets of the same data. | Different tasks are performed on the same or different data. |
| Computation is synchronous: every core applies the same operation in step over its subset. | Computation is asynchronous: each core runs its own task independently. |
| Speedup is typically greater, since one operation scales uniformly across all subsets of the data. | Speedup is typically smaller, since each processor executes a different thread or process on the same or a different set of data. |
| The amount of parallelization is proportional to the input size. | The amount of parallelization is proportional to the number of independent tasks. |
| Designed for optimum load balance on a multiprocessor system. | Load balancing depends on the availability of hardware and on scheduling algorithms such as static and dynamic scheduling. |
