Analysis of an algorithm helps us determine whether the algorithm is useful or not. Generally, an algorithm is analyzed based on its execution time **(Time Complexity)** and the amount of space **(Space Complexity)** it requires.

Since large memory devices are now available at reasonable cost, storage space is rarely the limiting factor, so space complexity is given less importance.

Parallel algorithms are designed to improve the computation speed of a computer. For analyzing a Parallel Algorithm, we normally consider the following parameters −

- Time complexity (Execution Time),
- Total number of processors used, and
- Total cost.

The main reason behind developing parallel algorithms was to reduce the computation time of an algorithm. Thus, evaluating the execution time of an algorithm is extremely important in analyzing its efficiency.

Execution time is measured as the time taken by the algorithm to solve a problem, counted from the moment the algorithm starts executing to the moment it stops. If the processors do not all start or finish at the same time, the total execution time is measured from the moment the first processor starts its execution to the moment the last processor stops its execution.
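As a minimal sketch of this rule, the total execution time of a group of parallel workers can be measured from the earliest start to the latest stop. The worker function, delays, and thread counts below are illustrative assumptions, not part of any specific algorithm:

```python
import threading
import time

def worker(delay, records, index):
    # Record this worker's start time, simulate work, record its stop time.
    records[index] = {"start": time.perf_counter()}
    time.sleep(delay)
    records[index]["stop"] = time.perf_counter()

delays = [0.01, 0.03, 0.02]   # each "processor" takes a different amount of time
records = [None] * len(delays)
threads = [threading.Thread(target=worker, args=(d, records, i))
           for i, d in enumerate(delays)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Total execution time: first processor's start to last processor's stop.
total_time = max(r["stop"] for r in records) - min(r["start"] for r in records)
print(f"total execution time: {total_time:.4f} s")
```

The measured total is governed by the slowest worker, which is exactly why load balancing matters in parallel algorithms.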

Time complexity of an algorithm can be classified into three categories −

- **Worst-case complexity** − When the amount of time required by an algorithm for a given input is **maximum**.
- **Average-case complexity** − When the amount of time required by an algorithm for a given input is **average**.
- **Best-case complexity** − When the amount of time required by an algorithm for a given input is **minimum**.
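A simple linear search makes these cases concrete: in the best case the key sits at the first position, in the worst case the whole list is scanned. The helper function below is an illustrative sketch:

```python
def linear_search(arr, key):
    """Return (index, comparisons) for a linear scan, or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
_, best = linear_search(data, 7)   # best case: key at the front, 1 comparison
_, worst = linear_search(data, 2)  # worst case: key absent, scans all elements
print(best, worst)   # 1 vs. len(data) comparisons
```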

The complexity or efficiency of an algorithm is the number of steps it executes to obtain the desired output. Asymptotic analysis is used to calculate the complexity of an algorithm in its theoretical analysis; in asymptotic analysis, the complexity function of the algorithm is evaluated for large input sizes.

**Note** − **Asymptotic** describes a condition in which a line tends toward a curve without ever intersecting it; the line and the curve are said to be asymptotic to each other.

Asymptotic notation is the easiest way to describe the fastest and slowest possible execution time of an algorithm using upper and lower bounds on its running time. For this, we use the following notations −

- Big O notation
- Omega notation
- Theta notation

In mathematics, Big O notation is used to represent the asymptotic characteristics of functions. It describes the behavior of a function for large inputs in a simple and accurate manner. It represents the upper bound of an algorithm’s execution time, i.e. the longest time the algorithm could take to complete its execution. The function −

f(n) = O(g(n))

iff there exist positive constants **c** and **n0** such that **f(n) ≤ c × g(n)** for all **n ≥ n0**.
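This definition can be checked numerically. For example, f(n) = 3n + 10 is O(n), witnessed by c = 4 and n0 = 10; these constants are one valid choice among many, used here purely for illustration:

```python
def f(n):
    return 3 * n + 10   # example function: f(n) = 3n + 10

def g(n):
    return n            # candidate bound: g(n) = n

c, n0 = 4, 10
# Verify f(n) <= c * g(n) for a range of n >= n0.
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = 3n + 10 is O(n) with c=4, n0=10")
```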

Omega notation is a method of representing the lower bound of an algorithm’s execution time. The function −

f(n) = Ω(g(n))

iff there exist positive constants **c** and **n0** such that **f(n) ≥ c × g(n)** for all **n ≥ n0**.

Theta notation is a method of representing both the lower bound and the upper bound of an algorithm’s execution time. The function −

f(n) = θ(g(n))

iff there exist positive constants **c1, c2,** and **n0** such that **c1 × g(n) ≤ f(n) ≤ c2 × g(n)** for all **n ≥ n0**.
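The two-sided Θ bound can also be verified numerically. For instance, f(n) = 2n² + 3n is θ(n²), with c1 = 2, c2 = 3, and n0 = 3 as one valid choice of constants (again purely illustrative):

```python
def f(n):
    return 2 * n * n + 3 * n   # example function: f(n) = 2n^2 + 3n

def g(n):
    return n * n               # candidate bound: g(n) = n^2

c1, c2, n0 = 2, 3, 3
# Verify c1 * g(n) <= f(n) <= c2 * g(n) for a range of n >= n0.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("f(n) = 2n^2 + 3n is Theta(n^2) with c1=2, c2=3, n0=3")
```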

The performance of a parallel algorithm is determined by calculating its **speedup**. Speedup is defined as the ratio of the worst-case execution time of the fastest known sequential algorithm for a particular problem to the worst-case execution time of the parallel algorithm.

Speedup = (Worst-case execution time of the fastest known sequential algorithm for the problem) / (Worst-case execution time of the parallel algorithm)
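As a worked example with hypothetical figures — say the fastest known sequential algorithm takes 120 s in the worst case and the parallel algorithm takes 20 s — the speedup is computed as:

```python
def speedup(t_sequential, t_parallel):
    # Ratio of worst-case sequential time to worst-case parallel time.
    return t_sequential / t_parallel

print(speedup(120.0, 20.0))  # 6.0: the parallel algorithm is 6x faster
```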

The number of processors used is an important factor in analyzing the efficiency of a parallel algorithm. The cost of buying, maintaining, and running the processors is taken into account: the larger the number of processors an algorithm uses to solve a problem, the more costly the obtained result becomes.

Total cost of a parallel algorithm is the product of time complexity and the number of processors used in that particular algorithm.

Total Cost = Time complexity × Number of processors used

Therefore, the **efficiency** of a parallel algorithm is the speedup achieved per processor −

Efficiency = Speedup / Number of processors used = (Worst-case execution time of the sequential algorithm) / (Number of processors used × Worst-case execution time of the parallel algorithm)
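Putting cost and efficiency together, a small sketch with assumed figures (sequential time 120 s, parallel time 20 s on 8 processors) looks like this; efficiency here follows the standard definition, speedup divided by the processor count:

```python
def total_cost(t_parallel, processors):
    # Total cost = parallel time complexity x number of processors used.
    return t_parallel * processors

def efficiency(t_sequential, t_parallel, processors):
    # Efficiency = speedup / processors = T_s / (p * T_p).
    return t_sequential / (processors * t_parallel)

t_seq, t_par, p = 120.0, 20.0, 8
print(total_cost(t_par, p))         # 160.0 processor-seconds of total work
print(efficiency(t_seq, t_par, p))  # 0.75, i.e. 75% processor utilization
```

An efficiency of 1 would mean every processor contributes fully; values below 1 reflect parallel overheads such as communication and idle time.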
