A novel adaptive learning rate scheduler for deep neural networks

02/20/2019
by Rahul Yedida, et al.

Optimizing deep neural networks is widely regarded as an empirical process that requires manual tuning of several hyperparameters, such as the learning rate, weight decay, and dropout rate. Arguably, the learning rate is the most important of these, and it has received increasing attention in recent work. In this paper, we propose a novel method to compute the learning rate for training deep neural networks. We derive a theoretical framework for computing learning rates dynamically, and then present experimental results on standard datasets and architectures to demonstrate the efficacy of our approach.
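The abstract does not spell out the scheduling rule itself, so the following is only a minimal, hypothetical sketch of what a dynamically computed learning rate can look like in practice, not the method derived in the paper. It assumes a PyTorch model and illustrates one simple rule (step size rescaled by the current global gradient norm); the names `dynamic_lr_step`, `base_lr`, and the update rule are all illustrative assumptions.

```python
# Hypothetical sketch of a dynamically computed learning rate -- NOT the
# scheduler derived in the paper. Each step recomputes the rate from the
# current gradient magnitude instead of using a hand-tuned constant.
import torch

def dynamic_lr_step(model, loss_fn, batch, base_lr=0.1, eps=1e-8):
    """One SGD step whose learning rate is recomputed from the gradient norm."""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Global gradient norm across all parameters.
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters()
                               if p.grad is not None))

    # Illustrative rule (an assumption, not the paper's derivation):
    # take smaller steps when gradients are large, larger steps when small.
    lr = base_lr / (grad_norm.item() + eps)

    # Plain SGD update with the dynamically chosen rate.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
    return loss.item(), lr
```

Calling `dynamic_lr_step(model, loss_fn, (inputs, targets))` inside a training loop performs one update and returns the loss and the rate actually used, which makes the schedule easy to log and inspect.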
