Prime and Modulate Learning: Generation of forward models with signed back-propagation and environmental cues

09/07/2023
by Sama Daryanavard, et al.

Deep neural networks employing error back-propagation for learning can suffer from exploding and vanishing gradient problems. Numerous solutions have been proposed, such as normalisation techniques or restricting activation functions to rectified linear units. In this work we follow a different approach, which is particularly applicable to closed-loop learning of forward models, where back-propagation makes exclusive use of the sign of the error signal to prime the learning, whilst a global relevance signal modulates the learning rate. This is inspired by the interaction between local plasticity and global neuromodulation. For example, whilst driving on an empty road, one can allow for slow step-wise optimisation of actions, whereas, at a busy junction, an error must be corrected at once. Hence, the error is the priming signal and the intensity of the experience is a modulating factor in the weight change. The advantages of this Prime and Modulate paradigm are twofold: it is free from normalisation and it makes use of relevant cues from the environment to enrich the learning. We present a mathematical derivation of the learning rule in z-space and demonstrate its real-time performance on a robotic platform. The results show a significant improvement in the speed of convergence compared to that of conventional back-propagation.
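To make the paradigm concrete, below is a minimal sketch of such an update in Python/NumPy. It is an illustrative assumption rather than the authors' implementation: the single-hidden-layer architecture, the tanh non-linearity and all names (PrimeModulateNet, update, relevance) are hypothetical. Back-propagation carries only the sign of the error (the priming signal), while a scalar relevance cue from the environment scales the step size (the modulation signal).

import numpy as np

class PrimeModulateNet:
    """Illustrative sign-primed, relevance-modulated network (not the paper's code)."""

    def __init__(self, n_in, n_hidden, n_out, eta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.eta = eta

    def forward(self, x):
        # Forward model: hidden tanh layer followed by a linear readout.
        self.x = np.asarray(x, dtype=float)
        self.h = np.tanh(self.W1 @ self.x)
        self.y = self.W2 @ self.h
        return self.y

    def update(self, error, relevance):
        # Prime: back-propagate only the sign of the error signal.
        delta_out = np.sign(error)
        delta_hid = np.sign(self.W2.T @ delta_out) * (1.0 - self.h ** 2)
        # Modulate: the global relevance cue scales the learning rate.
        step = self.eta * relevance
        self.W2 += step * np.outer(delta_out, self.h)
        self.W1 += step * np.outer(delta_hid, self.x)

In a closed-loop setting the error would come from the reflex pathway and the relevance from an environmental cue (the "busy junction" signal); a hypothetical single step could then look like:

net = PrimeModulateNet(n_in=4, n_hidden=8, n_out=1)
prediction = net.forward([0.2, -0.1, 0.5, 0.0])
error = 0.3 - prediction                  # e.g. target minus forward-model output
net.update(error, relevance=0.8)          # busy scene -> high relevance -> larger step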
