What is UMA?


UMA stands for Uniform Memory Access. It is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, the access time to a memory location is independent of which processor makes the request or which memory chip contains the shared data.
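
The idea can be illustrated with a minimal sketch. The class, its names, and the constant latency below are purely illustrative assumptions, not part of any real machine; the point is only that the cost of an access does not depend on the requesting processor or on the address.

```python
# Conceptual sketch of UMA: every processor sees the same access latency,
# regardless of which processor issues the request or which cell is touched.
UNIFORM_LATENCY_NS = 100  # illustrative constant latency for every access

class UmaMemory:
    def __init__(self, size):
        self.cells = [0] * size

    def read(self, processor_id, address):
        # Latency is the same for every processor and every address.
        return self.cells[address], UNIFORM_LATENCY_NS

    def write(self, processor_id, address, value):
        self.cells[address] = value
        return UNIFORM_LATENCY_NS

mem = UmaMemory(1024)
mem.write(processor_id=0, address=42, value=7)
print(mem.read(processor_id=3, address=42))   # (7, 100) -- same cost for any CPU
```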

Although the UMA architecture is not suitable for building scalable parallel computers, it is well suited to constructing small-scale, single-bus multiprocessors. Two such machines are the Encore Multimax of Encore Computer Corporation, representing the technology of the late 1980s, and the Power Challenge of Silicon Graphics Computing Systems, representing the technology of the 1990s.

Encore Multimax

The most advanced feature of the Encore Multimax, when it appeared on the market, was the Nanobus, one of the first commercial applications of a pended bus. Unlike many locked buses, the Nanobus separates the address and data buses.

The address bus initiates both memory read and memory write transfers on the Nanobus. In a write transaction, the data bus is used together with the address bus, while in a read transaction the data bus can later be used by a memory unit to transfer the result of a previous read access.
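
A simplified model of this pended behaviour is sketched below. It is not the actual Nanobus protocol; the queues, function names, and tuple formats are assumptions chosen only to show that a read occupies the address bus for its request while the reply travels back later on the separate data bus.

```python
# Illustrative model of a pended (split-transaction) bus: the address bus
# carries the request, and for reads the memory bank replies later on the
# separate data bus, leaving the address bus free for new requests.
from collections import deque

address_bus = deque()   # outstanding requests issued on the address bus
data_bus = deque()      # replies travelling back on the separate data bus

def issue_read(cpu, addr):
    address_bus.append(("READ", cpu, addr))          # address phase only

def issue_write(cpu, addr, value):
    address_bus.append(("WRITE", cpu, addr, value))  # address + data together

def memory_service(memory):
    # A memory bank picks up a request and, for reads, replies on the data bus.
    kind, cpu, addr, *rest = address_bus.popleft()
    if kind == "WRITE":
        memory[addr] = rest[0]
    else:
        data_bus.append((cpu, memory.get(addr, 0)))  # pended reply

memory = {}
issue_write(cpu=0, addr=16, value=99)
issue_read(cpu=1, addr=16)
memory_service(memory)     # services the write
memory_service(memory)     # services the read; reply appears on data_bus
print(data_bus.popleft())  # (1, 99)
```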

Separate but co-operating arbiter logic is employed to allocate the address bus and the data bus among the 20 processors and 16 memory banks. A centralized arbiter realizes a fair round-robin arbitration policy for the address bus.
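
A minimal round-robin arbiter can be sketched as follows. The class and its fields are illustrative assumptions, not the Multimax implementation; it only demonstrates the fairness property that the grant rotates past the last master served.

```python
# Sketch of a fair round-robin arbiter: the grant goes to the next
# requesting master after the one that was served most recently.
class RoundRobinArbiter:
    def __init__(self, num_masters):
        self.num_masters = num_masters
        self.last_granted = num_masters - 1  # so master 0 is checked first

    def grant(self, requests):
        # requests: set of master ids currently asking for the bus
        for offset in range(1, self.num_masters + 1):
            candidate = (self.last_granted + offset) % self.num_masters
            if candidate in requests:
                self.last_granted = candidate
                return candidate
        return None  # no master is requesting

arbiter = RoundRobinArbiter(num_masters=36)   # 20 processors + 16 memory banks
print(arbiter.grant({3, 7, 35}))  # 3
print(arbiter.grant({3, 7, 35}))  # 7 -- fairness: 3 is not granted twice in a row
```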

However, the operation of the centralized address-bus arbiter can be influenced by distributed access-control mechanisms under certain conditions. If a processor or memory controller cannot gain control of the address bus for a certain number of bus cycles, it can use special bus-selection lines to force the central arbiter to deny the other bus masters access to the address bus.
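
The starvation-avoidance idea can be expressed as a filter placed in front of any arbiter, as in the sketch below. The threshold value and function names are illustrative assumptions, not figures from the Multimax documentation.

```python
# Sketch of the distributed override: a master that has waited too long
# masks out all other requests, so the central arbiter must serve it next.
STARVATION_THRESHOLD = 8  # illustrative; the real cycle count is a design parameter

def filter_requests(requests, wait_cycles):
    """Return the set of requests the central arbiter is allowed to see."""
    starved = {m for m in requests if wait_cycles.get(m, 0) >= STARVATION_THRESHOLD}
    # If any master has waited too long, every other request is hidden,
    # so the next grant must go to one of the starved masters.
    return starved if starved else requests

print(filter_requests({1, 4, 9}, {4: 12}))   # {4}
print(filter_requests({1, 4, 9}, {}))        # {1, 4, 9}
```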

Another notable feature of the Encore Multimax is the application of pipelining on both the processor boards and the memory boards. Pipelining enables a processor to start a new bus cycle before finishing the previous one, and a memory controller to receive a new memory access request before completing the servicing of the previous one. Pipelining is implemented by placing buffer registers on the processor and memory boards.
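
The sketch below models only the buffering aspect of this pipelining. The buffer depth and request format are assumptions for illustration; the point is that a new request can be accepted while an earlier one is still being serviced.

```python
# Illustrative model of buffer registers on a memory board: requests are
# queued so the controller can accept new work before finishing old work.
from collections import deque

class PipelinedMemoryController:
    def __init__(self, buffer_depth=2):
        self.input_buffer = deque(maxlen=buffer_depth)  # the buffer registers

    def accept(self, request):
        if len(self.input_buffer) < self.input_buffer.maxlen:
            self.input_buffer.append(request)   # accepted while earlier ones pend
            return True
        return False                            # buffers full: requester must retry

    def service_one(self):
        return self.input_buffer.popleft() if self.input_buffer else None

ctrl = PipelinedMemoryController()
print(ctrl.accept(("READ", 0x10)))   # True
print(ctrl.accept(("READ", 0x20)))   # True  -- accepted while the first is pending
print(ctrl.accept(("READ", 0x30)))   # False -- buffer registers are full
ctrl.service_one()
print(ctrl.accept(("READ", 0x30)))   # True  -- space freed after servicing one
```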

Power Challenge

The heart of the Power Challenge multiprocessor is the POWERpath-2 split-transaction shared bus. The associative memory used to split read transactions is built from eight so-called read resources; that is, up to eight reads can be outstanding at any time.
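
A small table-based sketch of the read-resource idea follows. The class, tag scheme, and field names are assumptions made for illustration; it only captures the limit of eight outstanding split reads at a time.

```python
# Sketch of read resources: outstanding split reads are tracked in a
# small table with eight entries, so at most eight reads can be pending.
MAX_READ_RESOURCES = 8

class ReadResourceTable:
    def __init__(self):
        self.outstanding = {}   # tag -> (requesting processor, address)
        self.next_tag = 0

    def start_read(self, cpu, addr):
        if len(self.outstanding) >= MAX_READ_RESOURCES:
            return None                        # no free read resource: read must wait
        tag = self.next_tag
        self.next_tag += 1
        self.outstanding[tag] = (cpu, addr)
        return tag

    def complete_read(self, tag, data):
        cpu, addr = self.outstanding.pop(tag)  # frees the read resource
        return cpu, data

table = ReadResourceTable()
tags = [table.start_read(cpu=i, addr=0x100 + i) for i in range(9)]
print(tags[-1])                                # None -- only eight reads may be outstanding
print(table.complete_read(tags[0], data=42))   # (0, 42)
```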

The POWERpath-2 bus was designed according to the RISC philosophy: the types and variations of bus transactions are few, and each transaction requires the same five bus cycles: arbitration, resolution, address, decode, and acknowledge. These five cycles are executed synchronously by every bus controller.
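
The fixed sequence can be walked through with the trivial sketch below; the per-cycle actions are schematic placeholders, not the actual controller logic.

```python
# Every POWERpath-2 transaction steps through the same five bus cycles.
BUS_CYCLES = ["arbitration", "resolution", "address", "decode", "acknowledge"]

def run_transaction(transaction_id):
    for cycle_number, phase in enumerate(BUS_CYCLES, start=1):
        print(f"transaction {transaction_id}, cycle {cycle_number}: {phase}")

run_transaction(transaction_id=0)
```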
