Low Rank Saddle Free Newton: Algorithm and Analysis

02/07/2020
by Thomas O'Leary-Roseberry, et al.

Many tasks in engineering and machine learning involve minimizing a high dimensional non-convex function, and the prevalence of saddle points poses a central challenge in practice. The Saddle Free Newton (SFN) algorithm can rapidly escape high dimensional saddle points by using the absolute value of the Hessian of the empirical risk function, which it approximates with a Lanczos type procedure. Motivated by recent empirical work showing that neural network training Hessians are typically low rank, we instead propose approximating the absolute Hessian via scalable randomized low rank methods; such factorizations can be inverted efficiently with the Sherman-Morrison-Woodbury formula. We derive bounds on expected convergence rates for a stochastic version of the algorithm, quantifying the errors incurred both by subsampling and by the low rank Hessian approximation. We test the method on standard neural network training benchmarks, MNIST and CIFAR-10. Numerical results demonstrate that, in addition to avoiding saddle points, the method can converge faster than first order methods, and that the Hessian can be subsampled significantly more aggressively than the gradient while the method retains its superior performance.
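As a rough sketch of the mechanics the abstract describes, the snippet below combines a standard randomized range finder with the Sherman-Morrison-Woodbury identity to apply an approximate inverse absolute Hessian to the gradient using only Hessian-vector products. The damping parameter gamma, the function names, and this NumPy realization are assumptions for illustration, not the authors' implementation.

import numpy as np

def low_rank_sfn_step(hvp, grad, dim, rank, gamma=1e-3, rng=None):
    """One illustrative low-rank saddle-free Newton step.

    hvp   : callable v -> H @ v (Hessian-vector product)
    grad  : gradient vector of length dim
    gamma : damping shift (assumed here, not specified in the abstract)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Randomized range finder: probe the dominant Hessian eigenspace.
    Omega = rng.standard_normal((dim, rank))
    Y = np.column_stack([hvp(Omega[:, i]) for i in range(rank)])
    Q, _ = np.linalg.qr(Y)
    # Rayleigh-Ritz: project the Hessian, eigendecompose the small matrix,
    # and take absolute eigenvalues so negative curvature is flipped.
    T = Q.T @ np.column_stack([hvp(Q[:, i]) for i in range(rank)])
    lam, S = np.linalg.eigh(0.5 * (T + T.T))
    V = Q @ S                      # approximate eigenvectors of H
    abs_lam = np.abs(lam)          # |H| ~ V diag(abs_lam) V^T
    # Sherman-Morrison-Woodbury with orthonormal V:
    # (gamma I + V D V^T)^{-1} = (1/gamma)(I - V diag(d_i/(d_i+gamma)) V^T)
    d = abs_lam / (abs_lam + gamma)
    return -(grad - V @ (d * (V.T @ grad))) / gamma

A quick check on a synthetic indefinite quadratic: with H = 0.5*(A + A.T) for a random A, hvp = lambda v: H @ v, and a random gradient g, the direction low_rank_sfn_step(hvp, g, dim=100, rank=20) descends even where ordinary Newton would be attracted to the saddle, since the absolute eigenvalues make the approximate curvature positive definite.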
