Front Contribution instead of Back Propagation

by Swaroop Mishra, et al.

Deep learning's outstanding track record across many domains has stemmed from the use of error backpropagation (BP). Several studies, however, have argued that BP cannot be executed in a real brain. BP also remains an important, unsolved bottleneck for memory usage and training speed. We propose a simple, novel algorithm, the Front-Contribution algorithm, as a compact alternative to BP. The contributions of all weights with respect to the final-layer weights are calculated before training commences, and all of these contributions are appended to the final-layer weights; i.e., the effective final-layer weights become a non-linear function of themselves. Our algorithm then essentially collapses the network, removing the need to update any weight outside the final layer. This reduction in parameters results in lower memory usage and higher training speed. We show that our algorithm produces exactly the same output as BP, in contrast to several recently proposed algorithms that only approximate BP. Our preliminary experiments demonstrate the efficacy of the proposed algorithm. Our work provides a foundation for effectively utilizing these presently under-explored "front contributions", and serves to inspire the next generation of training algorithms.
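The paper's exact procedure is not reproduced in this abstract, but the collapsing idea can be illustrated on a toy case. Below is a minimal sketch, assuming a scalar, purely linear two-layer network y = w2 * (w1 * x) with squared loss; the function names and the `s = w1**2 + w2**2` "front contribution" term are illustrative constructions for this toy setting, not the paper's general formulation. One ordinary BP step on (w1, w2) changes the effective weight w2 * w1 in a way that can be written in closed form over the effective weight alone, which is the sense in which the network "collapses" while matching BP exactly.

```python
# Hedged sketch: scalar linear two-layer "network" y = w2 * (w1 * x),
# loss L = 0.5 * (y - t)**2. All names are illustrative, not from the paper.

def bp_step(w1, w2, x, t, lr):
    """One step of ordinary backpropagation, updating both weights."""
    y = w2 * w1 * x
    err = y - t                 # dL/dy
    g1 = err * w2 * x           # dL/dw1
    g2 = err * w1 * x           # dL/dw2
    return w1 - lr * g1, w2 - lr * g2

def collapsed_step(w_eff, s, x, t, lr):
    """Update the effective weight w_eff = w2 * w1 directly.

    `s = w1**2 + w2**2` is the precomputed front-contribution term for
    this toy case. Expanding (w1 - lr*g1) * (w2 - lr*g2) gives exactly:
        w_eff - lr*err*x*s + (lr*err*x)**2 * w_eff
    """
    err = w_eff * x - t
    return w_eff - lr * err * x * s + (lr * err * x) ** 2 * w_eff

if __name__ == "__main__":
    w1, w2, x, t, lr = 0.7, -1.3, 0.5, 1.0, 0.1
    n1, n2 = bp_step(w1, w2, x, t, lr)
    w_eff = collapsed_step(w2 * w1, w1**2 + w2**2, x, t, lr)
    # The collapsed update reproduces BP's effect on the product exactly.
    print(abs(n1 * n2 - w_eff) < 1e-12)
```

Note that after one step the contribution term `s` itself changes, so in general the contribution must be maintained as a (non-linear) function of the weights across training, which echoes the abstract's remark that the effective final-layer weights are a non-linear function of themselves.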




Related papers

- Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks ("The back-propagation (BP) algorithm has been considered the de-facto met...")
- Deep Layer-wise Networks Have Closed-Form Weights ("There is currently a debate within the neuroscience community over the l...")
- Improving the Backpropagation Algorithm with Consequentialism Weight Updates over Mini-Batches ("Least mean squares (LMS) is a particular case of the backpropagation (BP...")
- Using Artificial Bee Colony Algorithm for MLP Training on Earthquake Time Series Data Prediction ("Nowadays, computer scientists have shown the interest in the study of so...")
- Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures ("The backpropagation of error algorithm (BP) is often said to be impossib...")
- Towards truly local gradients with CLAPP: Contrastive, Local And Predictive Plasticity ("Back-propagation (BP) is costly to implement in hardware and implausible...")
- Research on the inverse kinematics prediction of a soft actuator via BP neural network ("In this work we address the inverse kinetics problem of motion planning ...")
