Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler

by Yu Ji, et al.

Unlike common neural networks (NNs) trained for inference on general-purpose processors, NNs developed for neuromorphic chips usually face a number of hardware-specific restrictions, including limited precision of network signals and parameters, a constrained computation scale, and a limited set of supported non-linear functions. This paper proposes a general methodology to address this challenge. It transforms an existing trained, unrestricted NN (usually built for a software execution substrate) into an equivalent network that meets the given hardware constraints, thereby decoupling NN applications from the target hardware. Formally, the original NN is expressed as a computational graph (CG) that is fine-tuned part by part, following a topological ordering, until it becomes the target CG. Several techniques, including a multilayer-perceptron (MLP)-based universal approximator, a data re-encoding method, a split-and-merge network reconstruction method, and a multi-phase weight-tuning algorithm, are proposed to overcome these restrictions. We have built a software tool implementing this methodology that supports both spiking neural networks (SNNs) and traditional artificial neural networks (ANNs). Its effectiveness has been demonstrated on a real neuromorphic chip and a processing-in-memory (PIM) design. Tests show that the extra inference error introduced by this solution is very limited and that the transformation time is much shorter than the retraining time. In addition, a range of parameter-sensitivity evaluations have been conducted to explore the tradeoff between network error, resource consumption, and different transformation strategies, providing insights for the co-design optimization of neuromorphic hardware and software.
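The core idea of the abstract, walking a trained computational graph in topological order and rewriting each part to satisfy hardware constraints, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the graph representation, the `quantize` step standing in for the paper's fine-tuning techniques, and all names are hypothetical, not the authors' actual tool.

```python
# Hedged sketch: transform a trained computational graph (CG) node by node
# in topological order, replacing each unrestricted parameter with a
# hardware-compatible version. The quantization step is a toy stand-in
# for the paper's per-part fine-tuning; it is NOT the paper's algorithm.
from collections import deque


def topological_order(graph):
    """Kahn's algorithm. `graph` maps each node to its list of successors."""
    indegree = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    return order


def quantize(w, bits=8):
    """Toy precision restriction: round a weight to a fixed-point grid."""
    scale = (1 << (bits - 1)) - 1
    return round(w * scale) / scale


def transform(graph, weights, bits=8):
    """Rewrite each node's parameter in topological order, so every node
    is adapted only after all of its predecessors have been adapted."""
    return {n: quantize(weights[n], bits) for n in topological_order(graph)}


# Tiny example CG: in -> fc1 -> fc2 -> out, with one scalar weight per node.
graph = {"in": ["fc1"], "fc1": ["fc2"], "fc2": ["out"], "out": []}
weights = {"in": 1.0, "fc1": 0.123456, "fc2": -0.654321, "out": 1.0}
restricted = transform(graph, weights)
```

The topological ordering matters because each node's replacement is tuned against the (already transformed) outputs of its predecessors, so errors introduced upstream can be compensated downstream rather than accumulating unchecked.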


