Regularized asymptotic descents for a class of nonconvex optimization problems
We propose and analyze regularized asymptotic descent (RAD) methods for finding the global minimizer of a class of possibly nonconvex, nonsmooth, or even discontinuous functions. Such functions arise as extensions of strongly convex functions with Lipschitz-continuous gradients. We establish an explicit representation for the solution of the regularized minimization problem, so that the method can find the global minimizer without being trapped at saddle points, local minima, or discontinuities. Our main theoretical result shows that the method enjoys global linear convergence with high probability for such functions. Moreover, the method is derivative-free and its per-iteration cost, i.e., the number of function evaluations, is bounded, so it attains a complexity bound of O(log(1/ϵ)) for finding a point whose optimality gap is less than ϵ > 0. Numerical experiments in up to 500 dimensions demonstrate the benefits of the method.
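To illustrate the general flavor of a derivative-free regularized descent loop, the following is a minimal sketch, not the authors' exact RAD update: it approximates the minimizer of a regularized subproblem around the current iterate by a softmin-weighted average of random samples, using only function evaluations. The function `rad_sketch`, the Gibbs/softmin estimate, the parameters `sigma`, `lam`, `num_samples`, and the test function are all illustrative assumptions, not taken from the paper.

```python
import numpy as np


def rad_sketch(f, x0, sigma=0.5, lam=1e-2, num_samples=200, num_iters=100, seed=0):
    """Illustrative derivative-free regularized descent loop (not the paper's exact method).

    At each iteration, the minimizer of the regularized subproblem
        min_y  f(y) + ||y - x_k||^2 / (2 * sigma)
    is approximated by a softmin-weighted average of Gaussian samples around x_k,
    using only function evaluations (no gradients).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # Draw candidate points around the current iterate.
        samples = x + np.sqrt(sigma) * rng.standard_normal((num_samples, x.size))
        # Regularized objective values at the candidates.
        vals = np.array([f(y) for y in samples]) + np.sum((samples - x) ** 2, axis=1) / (2 * sigma)
        # Softmin (Gibbs) weights concentrate on candidates with low regularized values.
        w = np.exp(-(vals - vals.min()) / lam)
        w /= w.sum()
        # Move to the weighted average, a Monte Carlo estimate of the regularized minimizer.
        x = w @ samples
    return x


if __name__ == "__main__":
    # A nonsmooth, discontinuous test function in 10 dimensions (illustrative only).
    f = lambda z: np.sum(np.abs(z)) + 0.1 * np.sum(np.floor(np.abs(z)))
    x_star = rad_sketch(f, x0=np.full(10, 3.0))
    print(f"objective after optimization: {f(x_star):.4f}")
```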