Direct Feedback Alignment with Sparse Connections for Local Learning

01/30/2019
by Brian Crafton, et al.

Recent advances in deep neural networks (DNNs) owe their success to training algorithms that use backpropagation and gradient descent. Backpropagation, while highly effective on von Neumann architectures, becomes inefficient when scaling to large networks. Commonly referred to as the weight transport problem, each neuron's dependence on the weights and errors located deeper in the network requires exhaustive data movement, which presents a key obstacle to improving the performance and energy efficiency of machine-learning hardware. In this work, we propose a bio-plausible alternative to backpropagation, drawing on advances in feedback alignment algorithms, in which the error computation at a single synapse reduces to the product of three scalar values, satisfying the three-factor rule. Using a sparse feedback matrix, we show that a neuron needs only a fraction of the information used by previous feedback alignment algorithms to yield results competitive with backpropagation. Consequently, memory and compute can be partitioned and distributed in whichever way produces the most efficient forward pass, so long as a single error value can be delivered to each neuron. We evaluate our algorithm on standard datasets, including ImageNet, to address the concern of scaling to challenging problems. Our results show an orders-of-magnitude improvement in data movement and a 2x improvement in multiply-and-accumulate operations over backpropagation. All the code and results are available at https://github.com/bcrafton/ssdfa.
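To make the idea concrete, the update rule can be sketched in a few lines of NumPy: instead of propagating the error back through the transposed forward weights, each hidden neuron receives a single scalar error through a fixed, sparse random feedback matrix, so its weight update is the product of three scalars (presynaptic activity, local activation derivative, and the fed-back error). The sketch below is a minimal illustration under these assumptions; the layer sizes, learning rate, and variable names are hypothetical and are not taken from the authors' repository.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 784, 256, 10   # illustrative sizes, not the paper's setup
lr = 0.01

# Forward weights are trained; the feedback matrix B stays fixed and sparse.
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

# Each hidden neuron is wired to exactly one output error component,
# so only a single scalar error has to be delivered to it.
B = np.zeros((n_out, n_hidden))
B[rng.integers(0, n_out, size=n_hidden), np.arange(n_hidden)] = rng.choice([-1.0, 1.0], size=n_hidden)

def train_step(x, y_onehot):
    """One sparse-DFA update on a single example (x: n_in, y_onehot: n_out)."""
    global W1, W2
    # Forward pass
    h_pre = x @ W1
    h = np.maximum(h_pre, 0.0)          # ReLU
    y = h @ W2                          # linear output layer for simplicity
    e = y - y_onehot                    # output error

    # Output layer sees its error locally.
    dW2 = np.outer(h, e)

    # Hidden layer: the error is projected through the fixed sparse feedback
    # matrix B instead of W2.T. With one nonzero per column of B, each weight
    # update is a product of three scalars: input, local ReLU derivative,
    # and the single fed-back error value.
    delta_h = (e @ B) * (h_pre > 0.0)
    dW1 = np.outer(x, delta_h)

    W1 -= lr * dW1
    W2 -= lr * dW2
```

In a dense direct feedback alignment variant, B would be a full random matrix and every hidden neuron would receive all output errors; making B sparse is what reduces the information each neuron needs to a single scalar, which is the property the paper exploits to cut data movement.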


