What is Parallel Execution in Computer Architecture?


When instructions are executed in parallel, they can complete out of program order. It does not matter whether instructions are issued or dispatched in order or out of order, or whether shelving is used. The point is that unequal execution times force instructions to finish out of order, even when they are issued (and dispatched) in order: a short, 'younger' instruction can complete before a long, 'older' one. Thus, superscalar execution gives rise to out-of-order finishing of instructions.
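This effect can be illustrated with a minimal sketch. The instruction names and latencies below are hypothetical and not tied to any real ISA; the only point is that in-order issue plus unequal latencies yields out-of-order finishing.

```python
# (program_order, name, latency_in_cycles) -- hypothetical latencies
instructions = [
    (0, "DIV", 10),   # long, 'older' instruction
    (1, "ADD", 1),    # short, 'younger' instruction
    (2, "MUL", 3),
]

# Instructions are issued in program order, one per cycle, so
# instruction i starts at cycle i and finishes at cycle i + latency.
finish_times = [(name, i + latency) for i, name, latency in instructions]

# Sorting by finish time reveals the completion order.
completion_order = [name for name, t in sorted(finish_times, key=lambda x: x[1])]
print(completion_order)  # ['ADD', 'MUL', 'DIV'] -- the younger ADD finishes first
```

Although DIV was issued first, ADD and MUL overtake it because of their shorter latencies.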

Here, a distinction can be made between the terms 'to finish', 'to complete', and 'to retire' an instruction. The term 'to finish' denotes that the required operation of the instruction has been performed, except for writing back the result into the referenced architectural register or memory location and updating the status bits.

In contrast, the term 'to complete' refers to the last action of instruction execution, which is writing back the result into the referenced architectural register. Finally, in connection with the reorder buffer (ROB), instead of 'to complete' we say 'to retire', since in this case two tasks have to be performed: writing back the result and deleting the completed instruction from the head of the ROB.
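The in-order retirement discipline of a ROB can be sketched as follows. This is an illustrative model, not a real design: instructions may finish in any order, but an entry retires only when it reaches the head of the buffer and has finished.

```python
from collections import deque

rob = deque(["DIV", "ADD", "MUL"])   # program order, oldest at the head
finished = {"ADD", "MUL"}            # the younger instructions finished early

retired = []
# Retire only from the head: write back the result, then delete the entry.
while rob and rob[0] in finished:
    retired.append(rob.popleft())
print(retired)  # [] -- DIV at the head has not finished, so nothing retires

finished.add("DIV")                  # the oldest instruction finally finishes
while rob and rob[0] in finished:
    retired.append(rob.popleft())
print(retired)  # ['DIV', 'ADD', 'MUL'] -- retirement happens in program order
```

Even though ADD and MUL finished first, they cannot retire past the unfinished DIV; the ROB thereby restores program order at the architectural level.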

Under special conditions, out-of-order finishing can be avoided despite multiple EUs. The conditions are that instructions must be issued in order and that all the EUs operating in parallel must have equal execution times.

These conditions can be fulfilled by using dual pipelines and lock-stepping them, that is, lengthening the shorter pipeline by introducing unused extra cycles ('bubbles') into it. However, these prerequisites are overly restrictive and impede performance.
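A minimal sketch of this lock-stepping idea, assuming a hypothetical 2-wide in-order machine: the pipeline holding the shorter-latency instruction is padded with bubbles so that a pair issued together also completes together.

```python
def lockstep(pair):
    """Pad the shorter-latency instruction of an issued pair with bubbles."""
    (n1, lat1), (n2, lat2) = pair
    target = max(lat1, lat2)          # both pipes run for the longer latency
    return {
        n1: lat1 * ["work"] + (target - lat1) * ["bubble"],
        n2: lat2 * ["work"] + (target - lat2) * ["bubble"],
    }

# Hypothetical pair: a 1-cycle ADD issued alongside a 3-cycle MUL.
schedule = lockstep([("ADD", 1), ("MUL", 3)])
print(schedule["ADD"])  # ['work', 'bubble', 'bubble'] -- ADD idles until MUL completes
```

The bubbles are exactly the wasted cycles that make this scheme costly: the ADD pipeline is idle two-thirds of the time, which is why the text notes that these prerequisites impede performance.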

Therefore, only a few superscalar processors avoid out-of-order completion in this way. Examples are the MC 68060 and the Pentium, both of which employ lock-stepped dual pipelines.

History of Parallel Execution

Activity in parallel computing dates back to the late 1950s, with developments emerging in the form of supercomputers during the 1960s and 1970s. These were shared-memory multiprocessors, with multiple processors working side by side on shared data.

In the mid-1980s, a new form of parallel computing emerged when the Caltech Concurrent Computation project built a supercomputer for scientific applications from 64 Intel 8086/8087 processors.

Today, parallel computing is becoming mainstream, based on multi-core processors. Most desktop and laptop systems now ship with dual-core microprocessors, with quad-core processors readily available. Chip manufacturers have begun to improve overall processing performance by adding extra CPU cores.

Published on 23-Jul-2021 07:31:40