What is Scalable Coherent Interface?


The Scalable Coherent Interface (IEEE P1596) is an interface standard for high-performance multiprocessors. It provides a cache-coherent memory model extensible to systems with up to 64K nodes. The Scalable Coherent Interface (SCI) supports a peak bandwidth of 1 GigaByte/second per node.

The major purpose of the SCI standard is to provide a memory-address-based, cache-coherent communication scheme for building scalable parallel machines with a large number of processors. The SCI coherence protocol supports a scalable, linked-list design of distributed directories.

The cache mechanism maintains, for each shared block, a linked list of all the processors caching that block, and allows concurrent modification of this list for maximum concurrency. There are no locks and no central choke points in the protocol, enabling it to scale linearly with the number of processors.

All SCI transactions are organized as packets. The base protocol is a write-back invalidate protocol that provides forward progress, delivery, integrity, basic error detection, and recovery. The SCI coherence organization is based on a so-called sharing list, in which all caches holding a copy of a coherent block are chained together.

The head element of the list is associated with the memory block where the corresponding memory line is stored. The memory tag of each block is extended with the identifier of the head node of the sharing list (forw_id field) and a 2-bit memory state (mstate field).

The cache tags of blocks are extended with two identifiers: one for the previous node (back_id field) and one for the next node (forw_id field) of the sharing list.
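The tag extensions described above can be sketched as simple record types. This is an illustrative model only, not code from the standard; the class names `MemoryTag` and `CacheTag` and the `NIL` sentinel are assumptions, while the field names `forw_id`, `back_id`, and `mstate` follow the text.

```python
# Hypothetical sketch of SCI directory tags. Node identifiers are modeled
# as plain integers, and NIL marks "no node". Field names (forw_id,
# back_id, mstate) follow the article; everything else is illustrative.
from dataclasses import dataclass
from typing import Optional

NIL = None  # sentinel meaning "no node in this direction"

@dataclass
class MemoryTag:
    """Per-block tag kept at the memory (home) node."""
    forw_id: Optional[int] = NIL  # node ID of the sharing-list head
    mstate: int = 0               # 2-bit memory state

@dataclass
class CacheTag:
    """Per-block tag kept at each node caching the block."""
    back_id: Optional[int] = NIL  # previous node in the sharing list
    forw_id: Optional[int] = NIL  # next node in the sharing list
```

With both pointers in every cache tag, the sharing list behaves like a doubly linked list, which is what makes the insertion and deletion operations below purely local pointer updates.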

Four operations are defined on the sharing list, as follows −

  • Creation − This is a simplified version of the insertion operation when the associated sharing list is empty.

  • Insertion − When a cache miss occurs, the node sends a prepend request to the memory, which responds with the pointer of the old head node and updates its head pointer with the address of the requester. After receiving the response from the memory, the new head notifies the old head node by sending a new-head request. The old head updates its backward pointer and returns its data field.

  • Deletion − When a node wants to remove its cache line from the sharing list, it sends two messages. First, it sends an update-backward request containing its backward pointer to its successor. The successor updates its backward pointer with the value received in the request.

The second message, an update-forward request, is sent to the predecessor. This message contains the forward pointer of the requester and is used by the predecessor to update its forward pointer.

  • Reduction to a single node − When a cache line is written, all the other elements of the sharing list must be invalidated. Only the head node has the right to update a cache line and to remove the other elements from the sharing list. This is the reduction process and it is performed sequentially.

Updated on: 23-Jul-2021
