Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

by   Mingqing Xiao, et al.
Peking University
The Chinese University of Hong Kong, Shenzhen

Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks, using surrogate derivatives or computing gradients with respect to spiking times to deal with the problem. These approaches either accumulate approximation errors or propagate information only through existing spikes, and they usually require information propagation along time steps, with large memory costs and biological implausibility. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections gradually evolve to an equilibrium state over time, which follows a fixed-point equation. Then, by viewing the forward computation of feedback SNNs as a black-box solver for this equation and leveraging implicit differentiation on the equation, we can compute gradients for parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled, and the problem of non-differentiable spiking functions is therefore avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at
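The core idea of implicit differentiation on an equilibrium can be illustrated with a small sketch. The snippet below is a hedged, simplified stand-in (not the paper's actual method): the spiking dynamics are replaced by a smooth fixed-point map `a = sigmoid(W a + b)`, the forward iteration is treated as a black-box solver, and the gradient with respect to `b` is obtained from the implicit function theorem rather than by unrolling the solver. All function names and the choice of sigmoid are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: implicit differentiation through a fixed-point
# equation a* = f(a*) = sigmoid(W a* + b). This is a smooth surrogate for
# the average-firing-rate equilibrium described above, NOT the paper's
# actual spiking model.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def solve_equilibrium(W, b, n_iters=200):
    """Treat forward iteration as a black-box solver for a = sigmoid(W a + b)."""
    a = np.zeros_like(b)
    for _ in range(n_iters):
        a = sigmoid(W @ a + b)
    return a

def implicit_grad_b(W, b, grad_loss_a):
    """dL/db via the implicit function theorem, without unrolling the solver.

    Differentiating a* = sigmoid(W a* + b) gives
        da*/db = (I - J)^{-1} D,
    where D = diag(sigmoid'(z)) at the equilibrium and J = D W,
    so dL/db = D (I - J)^{-T} dL/da*.
    """
    a = solve_equilibrium(W, b)
    s = a * (1.0 - a)              # sigmoid'(z) expressed via the output a
    J = s[:, None] * W             # Jacobian of f w.r.t. a at equilibrium
    v = np.linalg.solve((np.eye(len(b)) - J).T, grad_loss_a)
    return s * v

rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))   # small weights keep the map contractive
b = rng.standard_normal(n)

# Loss L = sum(a*); check the implicit gradient against finite differences.
g = implicit_grad_b(W, b, np.ones(n))
eps = 1e-6
fd = np.array([
    (solve_equilibrium(W, b + eps * e).sum()
     - solve_equilibrium(W, b - eps * e).sum()) / (2 * eps)
    for e in np.eye(n)
])
print(np.max(np.abs(g - fd)))  # discrepancy should be tiny
```

Note how the backward pass only needs quantities at the equilibrium itself (one linear solve), which is what decouples it from the forward solver's trajectory.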


SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks

Spiking neural networks (SNNs) with event-based computation are promisin...

SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation

Large language models (LLMs), though growing exceedingly powerful, compr...

Energy Efficient Training of SNN using Local Zeroth Order Method

Spiking neural networks are becoming increasingly popular for their low ...

State-driven Implicit Modeling for Sparsity and Robustness in Neural Networks

Implicit models are a general class of learning models that forgo the hi...

Online Training Through Time for Spiking Neural Networks

Spiking neural networks (SNNs) are promising brain-inspired energy-effic...

EXODUS: Stable and Efficient Training of Spiking Neural Networks

Spiking Neural Networks (SNNs) are gaining significant traction in machi...

DeblurSR: Event-Based Motion Deblurring Under the Spiking Representation

We present DeblurSR, a novel motion deblurring approach that converts a ...
