What is the shared-memory model in computer architecture?

A shared-memory model is one in which processors communicate by reading and writing locations in a shared memory that is equally accessible to all processors. In addition, each processor may have registers, buffers, caches, and local memory banks as further memory resources. Several basic issues must be considered in the design of shared-memory systems: access control, synchronization, protection, and security.

Access control specifies which processes may access which resources. Access control models check each access request issued by a processor to the shared memory against the contents of an access control table. The table contains flags that determine the legality of each access attempt. When access attempts are made, all disallowed attempts and illegal processes are blocked until the desired access completes. Requests from the sharing processes may modify the contents of the access control table during execution.
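The table-lookup check described above can be sketched in a few lines of Python. The table contents, process names, and resource names here are purely illustrative, not taken from any specific system:

```python
# Hypothetical access control table: flags per (process, resource) pair
# decide the legality of each access attempt.
READ, WRITE = "r", "w"

access_table = {
    ("p0", "mem_bank_0"): {READ, WRITE},  # p0 may read and write
    ("p1", "mem_bank_0"): {READ},         # p1 may only read
}

def check_access(process, resource, mode):
    """Return True if the table grants `mode` access; otherwise the
    attempt is disallowed and must be blocked."""
    return mode in access_table.get((process, resource), set())

print(check_access("p0", "mem_bank_0", WRITE))  # True  - legal attempt
print(check_access("p1", "mem_bank_0", WRITE))  # False - blocked
```

Because the table is ordinary mutable data, a request from a sharing process could update an entry at run time, matching the article's point that the table contents may change during execution.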

The flags of the access control table, together with the synchronization rules, specify the system's functionality. Synchronization constraints limit when sharing processes may access shared resources. Appropriate synchronization ensures that data flows correctly and that the system behaves as intended.
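A minimal sketch of such a synchronization constraint, using a lock to serialize access to a shared counter (the counter scenario is an assumed example, not from the article):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, one holder of the lock at a time."""
    global counter
    for _ in range(n):
        with lock:        # synchronization constraint: exclusive access
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - without the lock, concurrent updates could be lost
```

The lock plays the role of the synchronization rule: it limits the time during which each sharing thread may touch the shared resource, so the data flows properly.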

Protection is a system feature that prevents processes from making arbitrary access to resources belonging to other processes. Sharing and protection pull in opposite directions: sharing enables access, whereas protection restricts it.

The simplest shared-memory system consists of a single memory module accessed by two processors. Requests arrive at the memory module through its two ports, and an arbitration unit within the module passes one request at a time to the memory controller.

If the memory module is not busy and a single request arrives, the arbitration unit passes it to the memory controller and the request is granted. The module remains in a busy state while the request is being serviced. If a new request arrives while the memory is busy, the requesting processor may hold its request on the line until the memory becomes free, or it may repeat the request some time later.
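The grant/busy/retry behaviour above can be modelled with a toy class. This is an assumed illustration of the protocol, not an implementation of any real arbiter:

```python
class MemoryModule:
    """Toy model of a single memory module with an arbitration unit."""

    def __init__(self):
        self.busy = False

    def request(self, processor):
        """Grant the request if the module is free; otherwise refuse it,
        and the processor must hold the request or retry later."""
        if self.busy:
            return False
        self.busy = True   # module enters the busy state while servicing
        return True

    def complete(self):
        self.busy = False  # service finished; module is free again

mem = MemoryModule()
print(mem.request("P0"))  # True  - module was free, request granted
print(mem.request("P1"))  # False - module busy, P1 must retry
mem.complete()
print(mem.request("P1"))  # True  - granted once the module becomes free
```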

Shared-memory systems can be classified as uniform memory access (UMA), non-uniform memory access (NUMA), or cache-only memory architecture (COMA).

In a UMA system, the shared memory is accessible to all processors through an interconnection network, in the same way a single processor accesses its memory. All processors therefore have equal access time to any memory location. The interconnection network in a UMA system can be a single bus, multiple buses, a crossbar, or a multiport memory.

In a NUMA system, part of the shared memory is attached to each processor. The memory still forms a single address space, so any processor can access any memory location directly using its physical address. However, the access time depends on the distance between the processor and the memory module, which results in non-uniform memory access times.
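The UMA/NUMA distinction can be summarized with a hypothetical latency model. The numbers and the distance metric below are made up purely to illustrate uniform versus distance-dependent access time:

```python
def uma_latency(processor, module):
    """UMA: every processor sees the same access time to every module."""
    return 100  # assumed fixed cost, in arbitrary time units

def numa_latency(processor, module, local_cost=100, hop_cost=50):
    """NUMA: access time grows with the 'distance' between the processor
    and the memory module (modelled here as the index difference)."""
    return local_cost + hop_cost * abs(processor - module)

print(uma_latency(0, 3))   # 100 - same cost regardless of location
print(numa_latency(0, 0))  # 100 - access to local memory
print(numa_latency(0, 3))  # 250 - remote access costs more
```

A scheduler that is aware of this asymmetry will try to place data in the module attached to the processor that uses it most, which is the practical motivation for the NUMA classification.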

Updated on: 24-Jul-2021
