Unconstrained optimisation on Riemannian manifolds

08/25/2020
by Tuyen Trung Truong et al.

In this paper, we give explicit descriptions of versions of (Local-) Backtracking Gradient Descent and New Q-Newton's method in the Riemannian setting. Here are some easy-to-state consequences of the results in this paper, where X is a general Riemannian manifold of finite dimension and f: X → ℝ is a C^2 function which is Morse (that is, all of its critical points are non-degenerate).

Theorem. For random choices of the hyperparameters in the Riemannian Local Backtracking Gradient Descent algorithm and for random choices of the initial point x_0, the sequence {x_n} constructed by the algorithm either (i) converges to a local minimum of f or (ii) eventually leaves every compact subset of X (in other words, diverges to infinity on X). If f has compact sublevel sets, then only the former alternative happens. The convergence rate is the same as in the classical paper by Armijo.

Theorem. Assume that f is C^3. For random choices of the hyperparameters in the Riemannian New Q-Newton's method, if the sequence constructed by the algorithm converges, then the limit is a critical point of f. We have a local Stable-Center manifold theorem, near saddle points of f, for the dynamical system associated to the algorithm. If the limit point is a non-degenerate minimum point, then the rate of convergence is quadratic. If, moreover, X is an open subset of a Lie group and the initial point x_0 is chosen randomly, then we can globally avoid saddle points.

As an application, we propose a general method using Riemannian Backtracking GD to find the minimum of a function on a bounded ball in a Euclidean space, and carry out explicit calculations for computing the smallest eigenvalue of a symmetric square matrix.
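To make the eigenvalue application concrete, here is a minimal sketch in Python/NumPy of Riemannian gradient descent with Armijo backtracking, applied to the classical formulation of this problem: minimising the Rayleigh quotient f(x) = xᵀAx over the unit sphere, whose minimum value is the smallest eigenvalue of the symmetric matrix A. The function name, the hyperparameter values (alpha, beta, t0), and the choice of the normalisation map as retraction are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def smallest_eigenvalue(A, alpha=0.5, beta=0.7, t0=1.0,
                        max_iter=1000, tol=1e-10, seed=0):
    """Minimise the Rayleigh quotient x^T A x over the unit sphere
    using Riemannian gradient descent with Armijo backtracking.
    Illustrative sketch; hyperparameter values are assumptions."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)        # random initial point on the sphere

    f = lambda y: y @ A @ y       # the quadratic form restricted to the sphere
    for _ in range(max_iter):
        Ax = A @ x
        fx = x @ Ax
        # Riemannian gradient: project the Euclidean gradient 2Ax
        # onto the tangent space at x
        g = 2.0 * (Ax - fx * x)
        gnorm2 = g @ g
        if gnorm2 < tol**2:
            break
        # Armijo backtracking along -g, with the metric-projection
        # retraction R_x(v) = (x + v) / ||x + v||
        t = t0
        while True:
            y = x - t * g
            y /= np.linalg.norm(y)
            if f(y) <= fx - alpha * t * gnorm2:
                break
            t *= beta
        x = y
    return f(x), x
```

For a quick sanity check, the returned value can be compared against np.linalg.eigvalsh(A)[0] on a random symmetric matrix such as A = (B + B.T) / 2.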
