
Neuromorphic Computing - Architecture
The architecture of neuromorphic computers is inspired by the functioning of the human brain, where neurons and synapses work together as a single unit to store and process data. In this section, we will discuss the key components, working principles, and examples of neuromorphic architecture.

Key Components of Neuromorphic Architecture
- Neurons: The fundamental building blocks of neuromorphic systems, modeled on biological neurons. They process information and communicate through electrical spikes.
- Synapses: Connections between neurons that allow for the transmission of signals. Neuromorphic systems use variable-strength connections to mimic synaptic plasticity, enabling learning and memory formation.
- Layers of Neurons: Just as the human brain is organized into layers, neuromorphic architectures typically have multiple layers of interconnected neurons, allowing for complex processing and representation of information.
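To make these three components concrete, here is a minimal, illustrative Python sketch. The class names and parameter values are our own, not taken from any neuromorphic framework; it simply models neurons as threshold units, synapses as weighted connections, and layers as groups of neurons passing spikes forward.

import numpy as np

class Neuron:
    """A simple threshold neuron: accumulates input and emits a spike (1) or not (0)."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, value):
        self.potential += value

    def step(self):
        spike = 1 if self.potential > self.threshold else 0
        self.potential = 0.0  # reset after each step
        return spike

class Synapse:
    """A weighted connection between two neurons; the weight mimics synaptic strength."""
    def __init__(self, pre, post, weight=0.5):
        self.pre, self.post, self.weight = pre, post, weight

    def transmit(self, spike):
        # A spike from the pre-neuron delivers a weighted input to the post-neuron
        self.post.receive(spike * self.weight)

# A "layer" is simply a group of neurons that process inputs together
layer1 = [Neuron() for _ in range(3)]
layer2 = [Neuron() for _ in range(2)]
synapses = [Synapse(pre, post, weight=np.random.rand())
            for pre in layer1 for post in layer2]

# One processing step: external input -> layer 1 spikes -> layer 2 spikes
for neuron, value in zip(layer1, [1.2, 0.4, 1.5]):
    neuron.receive(value)
spikes1 = [n.step() for n in layer1]
for syn, spike in zip(synapses, np.repeat(spikes1, len(layer2))):
    syn.transmit(spike)
spikes2 = [n.step() for n in layer2]
print("Layer 1 spikes:", spikes1, "Layer 2 spikes:", spikes2)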
How Does a Neuromorphic Computer Work?
To understand how a neuromorphic computer works, you first need to understand how the neocortex in the brain functions. The neocortex is the part of the brain where higher cognitive functions such as sensory perception, motor commands, spatial reasoning, and language are thought to occur.
The neocortex is made up of neurons and synapses that send and carry information across the brain with remarkable speed and efficiency. Neuromorphic computers achieve this efficiency by using Spiking Neural Networks (SNNs). Learn more about SNNs here.
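As a rough illustration of the event-driven idea behind SNNs, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, a common SNN building block. The parameter values are illustrative and not taken from any specific chip; the point is that the neuron only emits a spike when its membrane potential crosses the threshold and stays silent otherwise, which is where much of the efficiency comes from.

# Minimal leaky integrate-and-fire (LIF) neuron; parameter values are illustrative.
TAU = 10.0        # membrane time constant (arbitrary units)
THRESHOLD = 1.0   # spike threshold
V_RESET = 0.0     # potential after a spike

def simulate_lif(input_current, dt=1.0):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    v = 0.0
    spikes = []
    for i in input_current:
        # Leaky integration: potential decays toward 0 and accumulates input
        v += dt * (-v / TAU + i)
        if v >= THRESHOLD:
            spikes.append(1)  # spike event: only now does the neuron "communicate"
            v = V_RESET
        else:
            spikes.append(0)  # silent: no event, no work downstream
    return spikes

# A brief burst of input followed by silence produces only a few spike events
current = [0.3] * 5 + [0.0] * 10 + [0.5] * 5
print(simulate_lif(current))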
Example
Here's a simple Python-based code example to explain the concept behind neuromorphic computing. This example simulates how artificial neurons communicate with each other using spikes, as in neuromorphic systems. We'll use a spiking neural network model with neurons that fire based on a threshold and synaptic weights that adjust based on a simplified learning rule.
import numpy as np

# Define parameters
NUM_NEURONS = 5       # Number of neurons in the network
THRESHOLD = 1.0       # Firing threshold for neurons
LEARNING_RATE = 0.1   # Rate at which synaptic weights are adjusted
TIMESTEPS = 10        # Number of simulation steps

# Initialize synaptic weights (connections between neurons)
synaptic_weights = np.random.rand(NUM_NEURONS, NUM_NEURONS)

# Initialize neuron states (0 means no spike, 1 means spike)
neuron_states = np.zeros(NUM_NEURONS)

# Simple learning rule: update synapse weights based on spikes
def update_weights(pre_neuron, post_neuron):
    if neuron_states[pre_neuron] == 1:  # Pre-neuron spiked
        if neuron_states[post_neuron] == 1:  # Post-neuron also spiked
            # Strengthen the synapse (positive reinforcement)
            synaptic_weights[pre_neuron, post_neuron] += LEARNING_RATE
        else:
            # Weaken the synapse (negative reinforcement)
            synaptic_weights[pre_neuron, post_neuron] -= LEARNING_RATE
        # Keep the weight within the [0, 1] range
        synaptic_weights[pre_neuron, post_neuron] = np.clip(
            synaptic_weights[pre_neuron, post_neuron], 0, 1)

# Simulation loop
for t in range(TIMESTEPS):
    print(f"Timestep {t + 1}:")

    # Simulate neuron input as random values
    inputs = np.random.rand(NUM_NEURONS)

    # Update neuron states based on inputs and synaptic weights
    for neuron in range(NUM_NEURONS):
        # Calculate total input to the neuron from synapses
        total_input = np.dot(synaptic_weights[:, neuron], neuron_states) + inputs[neuron]

        # Determine if the neuron fires (spikes)
        if total_input > THRESHOLD:
            neuron_states[neuron] = 1  # Neuron fires
        else:
            neuron_states[neuron] = 0  # Neuron does not fire

    print(f"Neuron states: {neuron_states}")
    print(f"Synaptic weights: \n{synaptic_weights}\n")

    # Update synaptic weights based on spikes (learning)
    for pre_neuron in range(NUM_NEURONS):
        for post_neuron in range(NUM_NEURONS):
            update_weights(pre_neuron, post_neuron)
Explanation
This simple Python program implements a Hebbian learning rule. Hebbian learning is a biologically inspired learning rule stating that neurons that fire together wire together. In other words, if two neurons are active at the same time, the connection between them is strengthened. The network consists of multiple neurons connected by synapses (with weights), and each neuron can "fire" or "spike" based on the total input it receives.
- Define Parameters: Initially, we define all the parameters, such as the number of neurons, the firing threshold, and the learning rate for the network.
- Learning Rule: The function update_weights(pre_neuron, post_neuron) updates the synaptic weight between two neurons. If the pre-neuron spikes and the post-neuron also spikes, the weight is increased (positive reinforcement). If only the pre-neuron spikes, the weight is decreased (negative reinforcement). In other words, this function implements the "fire together, wire together" rule mentioned above (a short standalone check of this rule follows after this list).
- Simulation Loop: At each time step, random input values are generated for each neuron, simulating external stimulation. Each neuron then sums its weighted synaptic input plus this external input and fires if the total exceeds the threshold, after which the synaptic weights are updated using the learning rule.
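To see the "fire together, wire together" rule in isolation, here is a short standalone check. It restates the update rule from the example above in compressed form (the variable names here are our own) and shows one weight being strengthened and another being weakened.

import numpy as np

LEARNING_RATE = 0.1
weights = np.array([[0.0, 0.5, 0.5]])  # weights from neuron 0 to neurons 1 and 2
states = np.array([1, 1, 0])           # neurons 0 and 1 spike; neuron 2 is silent

def hebbian_update(w, pre, post):
    if states[pre] == 1:
        if states[post] == 1:
            w[pre, post] += LEARNING_RATE  # fired together -> strengthen
        else:
            w[pre, post] -= LEARNING_RATE  # post stayed silent -> weaken
        w[pre, post] = np.clip(w[pre, post], 0, 1)

hebbian_update(weights, 0, 1)
hebbian_update(weights, 0, 2)
print(weights)  # w[0,1] grows to 0.6, w[0,2] shrinks to 0.4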
Examples of Neuromorphic Chips
- IBM TrueNorth: A digital neuromorphic chip designed to simulate one million neurons and 256 million synapses, enabling complex neural network computations.
- Intel Loihi: A research chip that uses spiking neural network models for on-chip learning and real-time processing capabilities.
- SpiNNaker: A neuromorphic computing platform designed to simulate large-scale brain-like computations, utilizing thousands of low-power processing cores.