Regularized asymptotic descents for nonconvex optimization

04/05/2020
by Xiaopeng Luo, et al.

In this paper we propose regularized asymptotic descent (RAD) methods for solving nonconvex optimization problems. Our motivation is first to apply a regularized iteration and then to use an explicit asymptotic formula to approximate the solution of each regularized minimization. We consider a class of possibly nonconvex, nonsmooth, or even discontinuous objectives extended from strongly convex functions with Lipschitz-continuous gradients, each of which has a unique global minimum and is continuously differentiable at the global minimizer. The main theoretical result shows that the RAD method enjoys global linear convergence with high probability for this class of nonconvex objectives, i.e., the method will not be trapped in saddle points, local minima, or even at discontinuities. Moreover, the method is derivative-free and its per-iteration cost, i.e., the number of function evaluations, is bounded, so it has a complexity bound O(log(1/ϵ)) for finding a point whose optimality gap is less than ϵ>0.
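The abstract does not state the explicit asymptotic formula used for each regularized minimization, so the following is only a minimal illustrative sketch of a generic derivative-free scheme in the same spirit: a softmin-weighted Monte Carlo average over a Gaussian-smoothed neighborhood, with a fixed sample budget per iteration (bounded function evaluations). The function names, the weighting rule, and all parameters here are assumptions for illustration, not the authors' RAD update.

```python
import numpy as np

def smoothed_softmin_step(f, x, sigma=0.3, delta=0.1, n_samples=64, rng=None):
    """One derivative-free step: approximate the minimizer of a locally
    regularized (Gaussian-smoothed) objective by a softmin-weighted
    Monte Carlo average of nearby sample points.

    NOTE: illustrative sketch only; this is not the RAD formula from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Fixed sample budget => bounded per-iteration cost (function evaluations).
    pts = x + sigma * rng.standard_normal((n_samples, x.size))
    vals = np.array([f(p) for p in pts])
    # Softmin weights emphasize low function values; delta acts as a
    # regularization temperature (assumed parameterization).
    w = np.exp(-(vals - vals.min()) / delta)
    w /= w.sum()
    return w @ pts  # weighted average approximates the regularized minimizer

def smoothed_softmin_descent(f, x0, iters=200, sigma=0.5, shrink=0.97, **kwargs):
    """Iterate the derivative-free step, gradually tightening the
    regularization neighborhood."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = smoothed_softmin_step(f, x, sigma=sigma, **kwargs)
        sigma *= shrink
    return x

if __name__ == "__main__":
    # Nonconvex, discontinuous test: a strongly convex quadratic plus a
    # discontinuous perturbation that vanishes near the global minimizer
    # at the origin (so f is smooth there, discontinuous elsewhere).
    f = lambda x: np.dot(x, x) + 0.5 * (np.cos(8 * np.linalg.norm(x)) < 0)
    x_star = smoothed_softmin_descent(f, x0=np.ones(5))
    print("approximate minimizer:", x_star)
```

The sampling-and-averaging structure is what makes the iteration derivative-free with a bounded per-iteration evaluation count; whether such a scheme achieves the paper's high-probability linear rate depends on the specific asymptotic formula developed there.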
