Multi-Scale Zero-Order Optimization of Smooth Functions in an RKHS

05/11/2020
by Shubhanshu Shekhar, et al.

We aim to optimize a black-box function f: X → ℝ under the assumption that f is Hölder smooth and has bounded norm in the RKHS associated with a given kernel K. This problem is known to have an agnostic Gaussian Process (GP) bandit interpretation, in which an appropriately constructed GP surrogate model with kernel K is used to obtain an upper confidence bound (UCB) algorithm. In this paper, we propose a new algorithm (LP-GP-UCB) where the usual GP surrogate model is augmented with Local Polynomial (LP) estimators of the Hölder smooth function f to construct a multi-scale UCB guiding the search for the optimizer. We analyze this algorithm and derive high-probability bounds on its simple and cumulative regret. We then prove that the elements of many common RKHSs are Hölder smooth, obtain the corresponding Hölder smoothness parameters, and hence specialize our regret bounds for several commonly used kernels. When specialized to the Squared Exponential (SE) kernel, LP-GP-UCB matches the optimal performance, while for the case of Matérn kernels (K_ν)_{ν>0}, it results in uniformly tighter regret bounds for all values of the smoothness parameter ν > 0. Most notably, for certain ranges of ν, the algorithm achieves near-optimal bounds on simple and cumulative regret, matching the algorithm-independent lower bounds up to polylog factors, and thus closing the large gap between the existing upper and lower bounds for these values of ν. Additionally, our analysis provides the first explicit regret bounds, in terms of the budget n, for the Rational-Quadratic (RQ) and Gamma-Exponential (GE) kernels. Finally, experiments with synthetic functions as well as a CNN hyperparameter-tuning task demonstrate the practical benefits of our multi-scale partitioning approach over several existing algorithms.
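To make the UCB mechanism concrete: at each round, a GP posterior is fit to the queries collected so far, and the next query is placed where the posterior mean plus a scaled posterior standard deviation is largest. The sketch below shows a plain GP-UCB loop with a Matérn kernel using scikit-learn; it deliberately omits the paper's local-polynomial augmentation and multi-scale partition, and the toy objective f, the candidate grid, the noise level, and the exploration weight beta are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical black-box objective on X = [0, 1], queried with noise.
def f(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
n = 30            # total query budget
noise_std = 0.1   # observation noise level (assumed)
beta = 2.0        # UCB exploration weight (a fixed heuristic here)

# Dense candidate grid standing in for a multi-scale partition of X.
grid = np.linspace(0.0, 1.0, 512).reshape(-1, 1)

# Start from one uniformly random query.
X_obs = [rng.uniform(0.0, 1.0)]
y_obs = [f(X_obs[0]) + noise_std * rng.standard_normal()]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise_std**2)

for t in range(1, n):
    # Fit the GP posterior to all observations so far.
    gp.fit(np.array(X_obs).reshape(-1, 1), np.array(y_obs))
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + beta * sigma              # plain GP-UCB acquisition
    x_next = grid[np.argmax(ucb)][0]     # maximize the UCB over the grid
    X_obs.append(x_next)
    y_obs.append(f(x_next) + noise_std * rng.standard_normal())

best = X_obs[int(np.argmax(y_obs))]
print(f"best query after {n} evaluations: x = {best:.3f}")
```

LP-GP-UCB modifies this template by combining the GP-based confidence bound with local-polynomial estimates of f over a hierarchical partition of X, yielding a tighter multi-scale UCB; see the paper for the exact construction.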
