
Neuromorphic Computing - Spiking Neural Networks (SNNs)
Spiking Neural Networks (SNNs) are a special type of neural network that works like biological neurons, transmitting and processing information through discrete, time-dependent spikes. Memristors are often used as the hardware components for implementing SNNs, as they can adjust their resistance based on spike timing and incoming signals. In this section, we give a detailed overview of SNN algorithms and compare them with conventional ANNs.
SNN Learning Mechanisms
Learning in SNNs relies on the timing of spikes and the plasticity of synaptic connections. In general, there are two main types of learning mechanisms:
Spike-Time-Dependent Plasticity (STDP)
Spike-timing-dependent plasticity (STDP) is an unsupervised learning algorithm based on the Hebbian rule, which can be summarized as "if two connected neurons fire at nearly the same time, the synapse between them should be strengthened." More precisely, if a presynaptic spike arrives shortly before the postsynaptic neuron fires, the synaptic weight is strengthened (potentiation); if it arrives shortly after, the weight is weakened (depression).
Memristors are two-terminal electrical components that can emulate this behavior by adjusting their resistance in accordance with these timing rules. See the Memristors chapter for more details.
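As a rough illustration, the following Python sketch shows a pair-based STDP weight update. The parameter values (A_PLUS, A_MINUS, the time constants) and the function name stdp_update are illustrative choices for this example, not part of any specific neuromorphic platform.

```python
import numpy as np

# Illustrative pair-based STDP parameters (values chosen for demonstration only)
A_PLUS = 0.01     # potentiation amplitude
A_MINUS = 0.012   # depression amplitude
TAU_PLUS = 20.0   # potentiation time constant (ms)
TAU_MINUS = 20.0  # depression time constant (ms)

def stdp_update(weight, t_pre, t_post):
    """Return the updated synaptic weight for one pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the synapse is potentiated; otherwise it is depressed.
    """
    dt = t_post - t_pre
    if dt > 0:
        dw = A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post -> strengthen
    else:
        dw = -A_MINUS * np.exp(dt / TAU_MINUS)  # post before pre -> weaken
    return np.clip(weight + dw, 0.0, 1.0)       # keep the weight in [0, 1]

# Example: presynaptic spike at 10 ms, postsynaptic spike at 15 ms
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # slightly increased weight
```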
Spike-Based Backpropagation
Spike-based backpropagation is a method used to train SNNs by adjusting synaptic weights based on the timing and occurrence of spikes, similar to how backpropagation is used in traditional ANNs. Training times can be long, because the network must run many forward passes over time steps, which is slow on standard computer hardware even with parallelization.
Backpropagation cannot be applied to SNNs directly, because the spiking neuron equation is non-differentiable at the moment a spike is emitted. The derivative therefore has to be approximated for backpropagation to work. These approximations can be built around the spike time, the membrane potential, a ReLU-like activation function, or even the STDP mechanism.
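The following sketch illustrates one common approximation strategy using PyTorch: the forward pass emits a hard spike, while the backward pass substitutes a smooth "fast sigmoid" derivative. The class name SurrogateSpike and the constant 10.0 controlling the steepness of the surrogate are illustrative assumptions, not a reference implementation.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Emit a spike (1.0) wherever the membrane potential exceeds the threshold (here 0)
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + k*|u|)^2, with k = 10 as an example
        surrogate_grad = 1.0 / (1.0 + 10.0 * membrane_potential.abs()) ** 2
        return grad_output * surrogate_grad

# Example: gradients flow through the otherwise non-differentiable spike
u = torch.randn(5, requires_grad=True)   # membrane potentials minus threshold
spikes = SurrogateSpike.apply(u)
spikes.sum().backward()
print(spikes, u.grad)
```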
SNN Models
Several neuron models are used in SNNs, including:
- Leaky Integrate-and-Fire (LIF): This is a common model that captures the essential dynamics of neurons. It integrates incoming spikes and generates an output spike when a certain threshold is reached, followed by a reset period (see the sketch after this list).
- Izhikevich Model: A more biologically realistic model that combines the efficiency of simple models with the richness of more complex spiking behaviors seen in biological neurons.
- Hodgkin-Huxley Model: This is a detailed model that simulates the ionic mechanisms underlying the generation of action potentials in biological neurons.
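As a minimal illustration of the LIF model mentioned above, the sketch below simulates a single neuron in discrete time. All parameter values (the time constant, threshold, reset value, and input current) are arbitrary example choices.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over an input current trace.

    Returns the membrane potential trace and a binary spike train.
    """
    v = v_rest
    potentials, spikes = [], []
    for i_t in input_current:
        # Leak toward the resting potential and integrate the input
        v += (dt / tau) * (-(v - v_rest) + i_t)
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(1)
            v = v_reset               # reset after the spike
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# Example: constant input current of 1.5 for 100 time steps
potentials, spikes = simulate_lif(np.full(100, 1.5))
print("Number of spikes:", spikes.sum())
```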
BNN vs ANN vs SNN
The following table compares Biological Neural Networks, Artificial Neural Networks, and Spiking Neural Networks:
| Properties | Biological Neural Networks | Artificial Neural Networks | Spiking Neural Networks |
|---|---|---|---|
| Information Representation | Spikes | Scalars | Spikes |
| Learning Paradigm | Synaptic Plasticity | Backpropagation | Plasticity/Backpropagation |
| Energy Efficiency | Very high (natural systems) | Moderate | High (event-driven computation) |
| Computation Style | Asynchronous (spike-timing) | Synchronous (continuous) | Asynchronous (spike-timing) |
| Platform | Brain | VLSI | Neuromorphic VLSI |
ANN-to-SNN Conversion
Converting a traditional ANN into an SNN involves translating the continuous outputs of the ANN into discrete spikes that SNNs can process. This is typically achieved by mapping the ANN's activation values onto spike rates, which allows the converted SNN to retain the functionality of the original ANN while benefiting from the energy efficiency and event-driven nature of spiking networks.
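The sketch below illustrates one simple form of this mapping, rate coding: non-negative activations are normalized to per-step firing probabilities and sampled as Poisson-like spike trains. The function name activations_to_spike_trains and all parameter values are illustrative assumptions; practical conversion tools typically also rescale weights and thresholds, which is omitted here.

```python
import numpy as np

def activations_to_spike_trains(activations, num_steps=100, max_rate=1.0, rng=None):
    """Convert non-negative ANN activations into Poisson-like spike trains.

    Each activation is normalized to a firing probability per time step,
    so stronger activations produce proportionally more spikes.
    """
    rng = np.random.default_rng() if rng is None else rng
    activations = np.asarray(activations, dtype=float)
    # Normalize to firing probabilities in [0, max_rate]
    rates = max_rate * activations / max(activations.max(), 1e-9)
    # One Bernoulli draw per neuron per time step
    return (rng.random((num_steps, activations.size)) < rates).astype(np.uint8)

# Example: three ReLU activations converted to 100-step spike trains
trains = activations_to_spike_trains([0.2, 0.5, 1.0], num_steps=100)
print("Spike counts per neuron:", trains.sum(axis=0))  # roughly proportional to activations
```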