Convex Optimization with Nonconvex Oracles
In machine learning and optimization, one often wants to minimize a convex objective function F but can only evaluate a noisy approximation F̂ of it. Even though F is convex, the noise may render F̂ nonconvex, making the task of minimizing F intractable in general. As a consequence, several works in theoretical computer science, machine learning, and optimization have focused on designing polynomial-time algorithms to minimize F under conditions on the noise F(x)−F̂(x), such as its uniform boundedness, or on F, such as strong convexity. However, in many applications of interest, these conditions do not hold. Here we show that, if the noise has magnitude at most α F(x) + β for some α, β > 0, then there is a polynomial-time algorithm to find an approximate minimizer of F. In particular, our result allows for unbounded noise and generalizes those of Applegate and Kannan, and Zhang, Liang and Charikar, who proved similar results for the bounded noise case, and that of Belloni et al., who assume that the noise grows in a very specific manner and that F is strongly convex. Turning our result on its head, one may also view our algorithm as minimizing a nonconvex function F̂ that is promised to be related to a convex function F as above. Our algorithm is a "simulated annealing" modification of the stochastic gradient Langevin Markov chain that gradually decreases the temperature of the chain to approach the global minimizer. Analyzing such an algorithm for the unbounded noise model and a general convex function turns out to be challenging and requires several technical ideas that might be of independent interest in deriving non-asymptotic bounds for other simulated-annealing-based algorithms.
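To make the high-level description concrete, the following is a minimal sketch of a stochastic gradient Langevin chain with a decreasing temperature, in the spirit of the algorithm described above. It is not the paper's actual algorithm: the oracle, the annealing schedule, and all constants here are illustrative assumptions. The convex target is taken to be F(x) = ‖x‖², perturbed by noise of magnitude proportional to α F(x) + β as in the noise model above.

```python
import numpy as np

ALPHA, BETA = 0.1, 0.1  # noise-model parameters (illustrative choices)

def noisy_grad(x, rng):
    """Hypothetical nonconvex oracle: gradient of F(x) = ||x||^2,
    corrupted by noise of magnitude ~ alpha*F(x) + beta."""
    F = float(x @ x)
    grad = 2.0 * x
    return grad + (ALPHA * F + BETA) * rng.standard_normal(x.shape)

def annealed_langevin(x0, steps=20000, eta=1e-3, seed=0):
    """Stochastic gradient Langevin chain whose temperature decays over time.

    Each step is a noisy gradient step plus Gaussian exploration noise
    scaled by sqrt(2 * eta * temperature); the logarithmic temperature
    schedule here is a common illustrative choice, not the paper's.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(1, steps + 1):
        temp = 1.0 / np.log(t + 1.0)  # cooling schedule (assumption)
        g = noisy_grad(x, rng)
        x = x - eta * g + np.sqrt(2.0 * eta * temp) * rng.standard_normal(x.shape)
    return x

x_final = annealed_langevin(np.array([3.0, -3.0]))
```

Despite the nonconvex oracle, the chain drifts toward the global minimizer of F at the origin, with the shrinking temperature trading early exploration for late exploitation.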