No Regret Bound for Extreme Bandits

by Robert Nishihara et al.

Algorithms for hyperparameter optimization abound, all of which work well under different and often unverifiable assumptions. Motivated by the general challenge of sequentially choosing which algorithm to use, we study the more specific task of choosing among distributions to use for random hyperparameter optimization. This work is naturally framed in the extreme bandit setting, which deals with sequentially choosing which distribution from a collection to sample in order to minimize (maximize) the single best cost (reward). Whereas the distributions in the standard bandit setting are primarily characterized by their means, a number of subtleties arise when we care about the minimal cost as opposed to the average cost. For example, there may not be a well-defined "best" distribution as there is in the standard bandit setting. The best distribution depends on the rewards that have been obtained and on the remaining time horizon. Whereas in the standard bandit setting, it is sensible to compare policies with an oracle which plays the single best arm, in the extreme bandit setting, there are multiple sensible oracle models. We define a sensible notion of "extreme regret" in the extreme bandit setting, which parallels the concept of regret in the standard bandit setting. We then prove that no policy can asymptotically achieve no extreme regret.
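To make the setting concrete, here is a minimal simulation sketch of the extreme bandit problem described above. All names and the two example cost distributions are hypothetical illustrations, not the authors' construction: a policy sequentially picks which distribution to sample, and performance is the single best (minimal) cost observed, not the average. The arms are chosen so that the arm with the lower mean is not the arm that yields the smallest minimum over a long horizon, illustrating why the "best" distribution depends on the time horizon.

```python
import random

def extreme_bandit_run(arms, horizon, policy, seed=0):
    """Simulate the extreme bandit setting: at each round the policy picks
    an arm (a cost distribution), one cost is sampled from it, and we track
    the single best (minimal) cost observed so far."""
    rng = random.Random(seed)
    best = float("inf")
    for t in range(horizon):
        k = policy(t)                # which distribution to sample this round
        cost = arms[k](rng)          # one draw from that distribution
        best = min(best, cost)       # only the minimum matters
    return best

# Two illustrative cost distributions (hypothetical): arm 0 has the lower
# mean, but arm 1 has support reaching near 0, so over a long horizon
# arm 1 tends to produce the smaller minimum.
arms = [
    lambda rng: rng.uniform(0.4, 0.6),   # mean 0.5, narrow support
    lambda rng: rng.uniform(0.0, 2.0),   # mean 1.0, but can reach near 0
]

# A naive round-robin policy, just to exercise the simulator.
round_robin = lambda t: t % len(arms)
best_cost = extreme_bandit_run(arms, horizon=1000, policy=round_robin)
```

A standard-bandit policy judged by mean cost would concentrate on arm 0, yet here the minimum achieved is driven almost entirely by the draws from arm 1, which is the subtlety the abstract highlights.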




