Direct Feedback Alignment Provides Learning in Deep Neural Networks

09/06/2016
by Arild Nøkland

Artificial neural networks are most commonly trained with the back-propagation algorithm, where the gradient for learning is provided by back-propagating the error, layer by layer, from the output layer to the hidden layers. A recently discovered method called feedback alignment shows that the weights used for propagating the error backward do not have to be symmetric with the weights used for propagating the activations forward. In fact, random feedback weights work equally well, because the network learns how to make the feedback useful. In this work, the feedback alignment principle is used to train hidden layers more independently from the rest of the network, and from a zero initial condition. The error is propagated through fixed random feedback connections directly from the output layer to each hidden layer. This simple method is able to achieve zero training error even in convolutional networks and very deep networks, completely without error back-propagation. The method is a step towards biologically plausible machine learning because the error signal is almost local, and no symmetric or reciprocal weights are required. Experiments show that the test performance on MNIST and CIFAR is almost as good as that obtained with back-propagation for fully connected networks. Combined with dropout, the method achieves 1.45% error on the permutation-invariant MNIST task.


Related research:

- Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning (08/09/2018)
- Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks (02/23/2017)
- Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass (01/27/2022)
- Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training (09/03/2019)
- Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks (09/30/2021)
- Principled Training of Neural Networks with Direct Feedback Alignment (06/11/2019)
- Feedback-Gated Rectified Linear Units (01/06/2023)
