What are the different levels of Parallelism?


There are different levels of parallelism, which are as follows −

  • Instruction Level − At the instruction level, a grain consists of fewer than 20 instructions and is called a fine grain. Fine-grain parallelism at this level may range from two to thousands, depending on the individual program. Single-instruction-stream parallelism is greater than two, but the average parallelism at the instruction level is around five, rarely exceeding seven in an ordinary program.

    For scientific applications, the average parallelism is in the range of 500 to 3000 Fortran statements executing concurrently in an idealized environment.

  • Loop Level − It embraces iterative loop operations. A loop may contain fewer than 500 instructions. Loop-independent iterations can be vectorized for pipelined execution or for lock-step execution on SIMD machines.

    Loop-level parallelism is the most optimized program construct to execute on a parallel or vector computer. Recursive loops, however, are difficult to parallelize. Vector processing is mostly exploited at the loop level by vectorizing compilers.

  • Procedural Level − It corresponds to medium grain size at the task, procedure, and subroutine levels. A grain at this level has fewer than 2000 instructions. Detection of parallelism at this level is much more difficult than at the finer-grain levels.

    The communication requirement is often less than that of the MIMD execution mode. However, significant effort is required from the programmer to restructure a program at this level.

  • Subprogram Level − It corresponds to job steps and related subprograms. Grain size here is fewer than 1000 instructions. Job steps can overlap across different jobs. Multiprogramming on a uniprocessor or a multiprocessor is conducted at this level.

  • Job Level − It corresponds to the parallel execution of independent tasks on a parallel computer. Grain size here can be tens of thousands of instructions. It is handled by the program loader and by the operating system. Time-sharing and space-sharing multiprocessors exploit this level of parallelism.
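As a minimal sketch of loop-level parallelism, the Python fragment below (names are illustrative, not from the original) shows a loop whose iterations are independent, so they can be distributed across workers exactly as a vectorizing compiler would schedule them for pipelined or lock-step execution:

```python
from concurrent.futures import ThreadPoolExecutor

# Loop body: each call depends only on its own input,
# so iterations carry no loop-carried dependence.
def body(x):
    return x * x

# Sequential form of the loop.
def squares_sequential(xs):
    return [body(x) for x in xs]

# Parallel form: the independent iterations are mapped
# onto a pool of workers and run concurrently.
def squares_parallel(xs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, xs))

print(squares_sequential([1, 2, 3, 4]))  # [1, 4, 9, 16]
print(squares_parallel([1, 2, 3, 4]))    # [1, 4, 9, 16]
```

Both forms produce the same result; only the schedule of the iterations differs, which is what makes the loop safe to parallelize.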
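Procedural-level (medium-grain) parallelism can be sketched by running two independent subroutines as separate threads; the procedure names below are hypothetical stand-ins for medium-grain tasks:

```python
import threading

# Shared dictionary collecting each procedure's result.
results = {}

# Two independent procedures: neither reads the other's output,
# so they can execute concurrently as medium-grain tasks.
def compute_sum(xs):
    results["sum"] = sum(xs)

def compute_max(xs):
    results["max"] = max(xs)

data = [3, 1, 4, 1, 5]
t1 = threading.Thread(target=compute_sum, args=(data,))
t2 = threading.Thread(target=compute_max, args=(data,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results["sum"], results["max"])  # 14 5
```

Note that the programmer had to verify the independence of the two procedures by hand, which reflects the point above that exploiting this level requires effort to restructure the program.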
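Job-level parallelism can be approximated with separate operating-system processes, each running an independent job with no communication between them (the workload function here is an assumed placeholder):

```python
from multiprocessing import Pool

# An independent "job": a coarse-grain workload with no
# communication with other jobs, so the OS is free to place
# each one on a separate processor.
def job(n):
    return sum(range(n))  # stand-in for tens of thousands of instructions

if __name__ == "__main__":
    # Two independent jobs scheduled as separate processes.
    with Pool(processes=2) as pool:
        print(pool.map(job, [10, 100]))  # [45, 4950]
```

Because the jobs share nothing, scheduling them is purely a resource-allocation decision, which is why this level is managed by the loader and the operating system rather than by the compiler.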

raja
Published on 30-Jul-2021 13:43:43