The Impact of Local Geometry and Batch Size on the Convergence and Divergence of Stochastic Gradient Descent

09/14/2017
by Vivak Patel, et al.

Stochastic small-batch (SB) methods, such as mini-batch Stochastic Gradient Descent (SGD), have been extremely successful in training neural networks with strong generalization properties. In the work of Keskar et al. (2017), an SB method's success in training neural networks was attributed to the fact that it converges to flat minima (minima whose Hessian has only small eigenvalues), while a large-batch (LB) method converges to sharp minima (minima whose Hessian has a few large eigenvalues). Commonly, this difference is attributed to the noisier gradients of SB methods, which allow SB iterates to escape from sharp minima. While this explanation is intuitive, we offer an alternative mechanism: we argue that SGD escapes from or converges to minima based on a deterministic relationship between the learning rate, the batch size, and the local geometry of the minimizer. We derive these relationships exactly through a rigorous mathematical analysis of the canonical quadratic sums problem. We then numerically study how these relationships extend to nonconvex, stochastic optimization problems. As a consequence of this work, we offer a more complete explanation of why SB methods prefer flat minima while LB methods seem agnostic to them, which can be leveraged to design SB and LB training methods with tailored optimization properties.
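The mechanism described in the abstract can be illustrated on a toy problem. Below is a minimal sketch, not taken from the paper: mini-batch SGD on a one-dimensional quadratic-sum objective, where the per-example curvatures `a_i`, their lognormal distribution, the `run_sgd` helper, and all parameter values are illustrative assumptions. It simply reports whether the iterates settle near the minimizer or blow up for a few (learning rate, batch size) combinations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "quadratic sum" objective: f(x) = (1 / (2n)) * sum_i a_i * x^2.
# The per-example curvatures a_i stand in for the local geometry (the Hessian
# eigenvalue) at the minimizer x* = 0; a sharp minimum has large curvature.
n = 1000
a = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # assumed spread of curvatures


def run_sgd(learning_rate, batch_size, steps=2000, x0=1.0):
    """Mini-batch SGD on f: each step multiplies x by (1 - lr * mean of sampled a_i)."""
    x = x0
    for _ in range(steps):
        batch = rng.choice(a, size=batch_size, replace=False)
        x *= 1.0 - learning_rate * batch.mean()
        if not np.isfinite(x) or abs(x) > 1e12:
            return np.inf  # iterates escaped (diverged from) the minimizer
    return abs(x)


# Compare a small and a large batch at two learning rates: one well below the
# deterministic stability threshold 2 / mean(a), and one above it.
for batch_size in (8, 256):
    for lr in (0.1, 1.1 * 2.0 / a.mean()):
        dist = run_sgd(lr, batch_size)
        print(f"batch={batch_size:4d}  lr={lr:5.2f}  |x_final - x*| = {dist:.3e}")
```

With a large batch, the sampled curvature concentrates around its mean, so whether the iterates contract or blow up is essentially determined by the learning rate relative to that mean curvature; with a small batch, the per-step factor varies with the sampled curvatures, so the same learning rate can behave differently near sharp versus flat minima.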
