Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness

12/02/2021
by   Ankit Garg, et al.

We study the complexity of optimizing highly smooth convex functions. For a positive integer p, we want to find an ϵ-approximate minimum of a convex function f, given oracle access to the function and its first p derivatives, assuming that the p-th derivative of f is Lipschitz. Recently, three independent research groups (Jiang et al., PMLR 2019; Gasnikov et al., PMLR 2019; Bubeck et al., PMLR 2019) developed a new algorithm that solves this problem with Õ(1/ϵ^(2/(3p+1))) oracle calls for constant p. This is known to be optimal (up to log factors) for deterministic algorithms, but known lower bounds for randomized algorithms do not match this bound. We prove a new lower bound that matches this bound (up to log factors), and holds not only for randomized algorithms, but also for quantum algorithms.
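As a quick illustration of the rate above, the following sketch tabulates the exponent 2/(3p+1) and the resulting leading-order call count for a few values of p (constants and log factors are dropped; the function names are ours, not from the paper). Note that p = 1 recovers the familiar ϵ^(-1/2) rate of accelerated gradient descent.

```python
# Illustrative sketch: oracle-call scaling O~(1/eps^(2/(3p+1))) for
# p-th order smooth convex optimization, as stated in the abstract.
# Constants and log factors are ignored; names here are hypothetical.

def oracle_exponent(p: int) -> float:
    """Exponent q such that oracle calls scale as (1/eps)**q."""
    return 2 / (3 * p + 1)

def oracle_calls(eps: float, p: int) -> float:
    """Leading-order call count (1/eps)**(2/(3p+1)), constants dropped."""
    return (1 / eps) ** oracle_exponent(p)

if __name__ == "__main__":
    for p in (1, 2, 3):
        print(f"p={p}: exponent 2/(3p+1) = {oracle_exponent(p):.4f}, "
              f"calls at eps=1e-6 ~ {oracle_calls(1e-6, p):.1f}")
```

Higher p gives a smaller exponent, so stronger smoothness assumptions buy polynomially fewer oracle calls; the paper's contribution is showing this rate cannot be improved even by randomized or quantum algorithms.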
