Suppose Moni has to attend an event. Moni's two other friends, Diya and Hena, are also invited. The condition is that all three friends must travel there separately, and all of them must be present at the door to get into the hall. Moni comes by car, Diya by bus, and Hena on foot. No matter how fast Moni and Diya arrive, they have to wait for Hena. So, to speed up the overall process, we need to concentrate on Hena's performance rather than Moni's or Diya's.
This is exactly what Amdahl's Law describes. It relates the improvement in a system's overall performance to the parts that perform poorly: to speed up the whole, we must improve those slow parts. The law is often used in parallel computing to predict the theoretical speedup achievable with multiple processors.
Amdahl's Law can be expressed mathematically as follows −
SpeedupMAX = 1/((1-p)+(p/s))
SpeedupMAX = maximum speedup of the overall system
p = the fraction of the system whose performance is being improved
s = the speedup factor of that fraction p after the enhancement
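The formula above can be sketched as a small helper function (a minimal sketch; the name amdahl_speedup is my own, not part of any standard library):

```python
def amdahl_speedup(p, s):
    """Maximum overall speedup when a fraction p of the work
    is accelerated by a factor s (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + (p / s))

# Doubling the performance of 30% of the system:
print(amdahl_speedup(0.30, 2))  # ≈ 1.18
```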
Let's take an example. If the part that can be improved is 30% of the overall system and its performance can be doubled, then −
SpeedupMAX = 1/((1-0.30)+(0.30/2)) = 1/0.85 ≈ 1.18
Now, in another example, if the part that can be improved is 70% of the overall system and its performance can be doubled, then −
SpeedupMAX = 1/((1-0.70)+(0.70/2)) = 1/0.65 ≈ 1.54
So, we can see that if the fraction 1-p cannot be improved, the overall performance of the system is limited: as s grows without bound, SpeedupMAX approaches 1/(1-p). So, if 1-p is 1/2, the overall speedup can never exceed 2, no matter how many processors are used.
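This limit is easy to see numerically by pushing s very high (a sketch; amdahl_speedup is a hypothetical helper implementing the formula above):

```python
def amdahl_speedup(p, s):
    # Amdahl's Law: 1 / ((1 - p) + (p / s))
    return 1.0 / ((1.0 - p) + (p / s))

# With half of the system serial (1 - p = 0.5), even an enormous s
# leaves the overall speedup pinned just below 1/(1-p) = 2.
for s in (2, 10, 1000, 1_000_000):
    print(s, round(amdahl_speedup(0.5, s), 4))
```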
Multicore programming is most commonly used in signal processing and plant-control systems. In signal processing, one can have a concurrent system that processes multiple frames in parallel. In plant-control systems, the controller and the plant can execute as two separate tasks.
Multicore programming helps split the system into multiple parallel tasks that run simultaneously, speeding up the overall execution time.
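As a toy illustration of splitting work into independent parallel tasks, one frame per task (a sketch using Python's standard concurrent.futures; the process_frame function and the frame data are made up for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Stand-in for the real per-frame signal-processing work.
    return sum(x * x for x in frame)

# Ten hypothetical "frames" of 100 samples each.
frames = [list(range(i, i + 100)) for i in range(0, 1000, 100)]

# Each frame becomes an independent task and the executor runs them
# concurrently. (For CPU-bound Python work one would normally use
# ProcessPoolExecutor instead, so tasks land on separate cores.)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_frame, frames))

print(len(results))  # one result per frame
```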