Smooth Bandit Optimization: Generalization to Hölder Space
We consider bandit optimization of a smooth reward function, where the goal is cumulative regret minimization. This problem has been studied for α-Hölder continuous (including Lipschitz) functions with 0<α≤1. Our main result generalizes the reward function class to Hölder spaces with exponent α>1, bridging the gap between Lipschitz bandits and infinitely differentiable models such as linear bandits. For Hölder continuous functions (α≤1), approaches based on random sampling within bins of a discretized domain suffice to achieve the optimal rate. In contrast, we propose a class of two-layer algorithms that deploy misspecified linear/polynomial bandit algorithms within the bins. We demonstrate that the proposed algorithms can exploit the higher-order smoothness of the function by deriving a regret upper bound of Õ(T^{(d+α)/(d+2α)}) when α>1, which matches the existing lower bound. We also study adaptation to unknown smoothness over a continuous scale of Hölder spaces indexed by α, using a bandit model selection approach applied to our proposed two-layer algorithms. We show that it achieves a regret rate matching the existing lower bound for adaptation within the α≤1 subset.
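The two-layer structure described above can be illustrated with a minimal sketch: the domain is discretized into bins, each bin runs a (possibly misspecified) polynomial bandit via ridge regression with an optimistic exploration bonus, and the top layer picks the bin with the largest optimistic value. This is an illustrative assumption-laden toy (one-dimensional domain, fixed bonus scale `beta`, a UCB-style top layer), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def features(x, degree):
    # Polynomial feature map (1, x, x^2, ...); degree would be chosen as floor(alpha)
    return np.array([x ** k for k in range(degree + 1)])

class BinPolyBandit:
    """Misspecified polynomial bandit inside one bin (ridge regression + UCB bonus).
    This is a hypothetical sketch, not the authors' exact subroutine."""
    def __init__(self, lo, hi, degree, lam=1.0, beta=1.0):
        self.lo, self.hi, self.degree = lo, hi, degree
        d = degree + 1
        self.A = lam * np.eye(d)   # regularized Gram matrix
        self.b = np.zeros(d)       # accumulated reward-weighted features
        self.beta = beta           # exploration-bonus scale (assumed constant)

    def select(self, n_cand=20):
        # Optimistic value over a candidate grid inside the bin
        xs = np.linspace(self.lo, self.hi, n_cand)
        Ainv = np.linalg.inv(self.A)
        theta = Ainv @ self.b
        best_x, best_u = xs[0], -np.inf
        for x in xs:
            phi = features((x - self.lo) / (self.hi - self.lo), self.degree)
            u = phi @ theta + self.beta * np.sqrt(phi @ Ainv @ phi)
            if u > best_u:
                best_x, best_u = x, u
        return best_x, best_u

    def update(self, x, r):
        phi = features((x - self.lo) / (self.hi - self.lo), self.degree)
        self.A += np.outer(phi, phi)
        self.b += r * phi

def two_layer_bandit(f, T, n_bins=4, degree=2, noise=0.1):
    """Top layer: play the bin whose local polynomial bandit is most optimistic."""
    bins = [BinPolyBandit(i / n_bins, (i + 1) / n_bins, degree)
            for i in range(n_bins)]
    total = 0.0
    for _ in range(T):
        cands = [bandit.select() for bandit in bins]
        i = int(np.argmax([u for _, u in cands]))
        x, _ = cands[i]
        r = f(x) + noise * rng.standard_normal()  # noisy bandit feedback
        bins[i].update(x, r)
        total += f(x)  # track noiseless reward for evaluation only
    return total / T   # average reward; regret is max f minus this
```

For a smooth reward such as `f(x) = -(x - 0.3)**2` (maximum 0), the average reward approaches 0 as the per-bin polynomial fits concentrate near the maximizer; the number of bins would in practice be tuned to `T`, `d`, and `α`.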