Intelligent gradient amplification for deep neural networks

by Sunitha Basodi, et al.

Deep learning models offer superior performance compared to other machine learning techniques for a variety of tasks and domains, but pose their own challenges. In particular, deep learning models require longer training times as the depth of a model increases, and suffer from vanishing gradients. Several solutions address these problems independently, but there has been minimal effort to identify an integrated solution that both improves the performance of a model by addressing vanishing gradients and accelerates training to achieve higher performance at larger learning rates. In this work, we intelligently determine which layers of a deep learning model to apply gradient amplification to, using a formulated approach that analyzes the gradient fluctuations of layers during training. Detailed experiments are performed on both simpler and deeper neural networks using two different intelligent measures and two different thresholds that determine the amplification layers, together with a training strategy in which gradients are amplified only during certain epochs. Results show that our amplification approach outperforms the original models, achieving an accuracy improvement of around 2.5% while training at around 4.5× higher learning rates.
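The core idea described above can be illustrated with a minimal sketch, not the authors' implementation: during selected epochs, the gradients of chosen layers are multiplied by a factor greater than one before the parameter update, and the layers to amplify are picked by analyzing gradient fluctuations. The fluctuation measure below (mean absolute gradient against a threshold) and the amplification factor are hypothetical stand-ins for the paper's intelligent measures and thresholds.

```python
# Hedged sketch of gradient amplification; the selection measure and the
# factor are illustrative assumptions, not the paper's exact formulation.

def select_layers_by_fluctuation(grad_history, threshold):
    """Hypothetical selection rule: amplify layers whose mean absolute
    gradient over recent steps falls below `threshold`, i.e. layers that
    appear to suffer from vanishing gradients."""
    selected = set()
    for name, history in grad_history.items():
        mean_abs = sum(abs(g) for g in history) / len(history)
        if mean_abs < threshold:
            selected.add(name)
    return selected


def amplify_gradients(layer_grads, amplify_layers, factor=2.0):
    """Scale the gradients of the selected layers by `factor` (> 1)
    before the optimizer step; other layers pass through unchanged."""
    return {
        name: [g * factor if name in amplify_layers else g for g in grads]
        for name, grads in layer_grads.items()
    }
```

A short usage example: with a recent gradient history, a layer whose gradients have nearly vanished is selected and its current gradients are doubled, while a healthy layer is left untouched.

```python
history = {"conv1": [0.001, 0.002, 0.001], "fc": [0.5, 0.4, 0.6]}
grads = {"conv1": [0.001, -0.002], "fc": [0.5, -0.4]}

chosen = select_layers_by_fluctuation(history, threshold=0.01)  # {"conv1"}
updated = amplify_gradients(grads, chosen, factor=2.0)
```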




