Blind Descent: A Prequel to Gradient Descent

06/20/2020
by Akshat Gupta, et al.

We describe an alternative to gradient descent for backpropagation through a neural network, which we call Blind Descent. We believe Blind Descent can be used to augment backpropagation, both as an initialisation method and as a fallback once gradient-based training saturates. By design, Blind Descent does not face problems such as exploding or vanishing gradients.
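Since the abstract only sketches the idea, a minimal illustration may help: the core of a gradient-free "blind" update is to propose a random weight perturbation and keep it only if the loss decreases. The sketch below is an assumption-laden reading of that description, not the paper's exact method; the Gaussian proposal distribution, the greedy acceptance rule, and the `lr` scale are all illustrative choices.

```python
# A minimal sketch of a gradient-free update rule consistent with the
# abstract's description. The Gaussian proposal, greedy acceptance test,
# and step scale `lr` are illustrative assumptions, not the paper's
# exact specification.
import numpy as np

rng = np.random.default_rng(0)

def blind_descent_step(weights, loss_fn, lr=0.01):
    """Propose a random perturbation; keep it only if the loss decreases.

    No gradient is ever computed: the step scale is set directly by `lr`
    rather than by backpropagated gradient magnitudes, so it cannot
    explode or vanish with network depth.
    """
    proposal = weights + lr * rng.standard_normal(weights.shape)
    return proposal if loss_fn(proposal) < loss_fn(weights) else weights

# Toy usage: fit w to a quadratic loss without any gradients.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: np.sum((w - target) ** 2)

w = np.zeros(3)
for _ in range(5000):
    w = blind_descent_step(w, loss)
print(w)  # approaches `target`
```

The same accept-if-better loop could serve either role the abstract mentions: run it for a few iterations to initialise weights before gradient descent, or switch to it when gradient updates have saturated.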


Related research

Secondary gradient descent in higher codimension (09/14/2018)
In this paper, we analyze discrete gradient descent and ϵ-noisy gradient...

A New Backpropagation Algorithm without Gradient Descent (01/25/2018)
The backpropagation algorithm, which had been originally introduced in t...

Power Gradient Descent (06/11/2019)
The development of machine learning is promoting the search for fast and...

Attempted Blind Constrained Descent Experiments (02/18/2021)
Blind Descent uses a constrained but guided approach to learn the weights...

Channel Normalization in Convolutional Neural Network avoids Vanishing Gradients (07/22/2019)
Normalization layers are widely used in deep neural networks to stabiliz...

Cogradient Descent for Bilinear Optimization (06/16/2020)
Conventional learning methods simplify the bilinear model by regarding t...

Convergence properties of gradient methods for blind ptychography (06/14/2023)
We consider blind ptychography, an imaging technique which aims to recon...
