Online Learning with Primary and Secondary Losses

10/27/2020
by Avrim Blum, et al.

We study the problem of online learning with primary and secondary losses. For example, a recruiter deciding which job applicants to hire might weigh false positives and false negatives equally (the primary loss), but the applicants might weigh false negatives much more heavily (the secondary loss). We consider the following question: Can we combine "expert advice" to achieve low regret with respect to the primary loss, while at the same time performing not much worse than the worst expert with respect to the secondary loss? Unfortunately, we show that this goal is unachievable without any bounded variance assumption on the secondary loss. More generally, we consider the goal of minimizing the regret with respect to the primary loss while bounding the secondary loss by a linear threshold. On the positive side, we show that running any switching-limited algorithm achieves this goal if all experts satisfy the assumption that the secondary loss does not exceed the linear threshold by more than o(T) on any time interval. If not all experts satisfy this assumption, our algorithms can achieve this goal given access to external oracles that determine when to deactivate and reactivate experts.
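The positive result concerns switching-limited algorithms, i.e., algorithms that change which expert they follow only rarely. The sketch below is not the paper's construction; it is a minimal toy illustration under invented assumptions (synthetic i.i.d. losses, a hypothetical threshold alpha, phase length B, and horizon T are all made up). It runs a phase-based follow-the-leader that switches at most T/B times: since every expert's secondary loss stays below the linear threshold alpha*t on this toy data, the algorithm's cumulative secondary loss stays below alpha*T as well, while it competes with the best expert on primary loss.

```python
import numpy as np

rng = np.random.default_rng(0)

T, K = 5000, 5     # horizon and number of experts (illustrative choices)
B = 100            # phase length: at most T/B switches, i.e., switching-limited
alpha = 0.6        # hypothetical linear threshold: secondary loss <= alpha * t

# Synthetic loss streams in [0, 1]; in the paper's setting these arrive online.
primary = rng.uniform(0, 1, size=(T, K)) * rng.uniform(0.2, 1.0, size=K)
secondary = rng.uniform(0, 0.55, size=(T, K))  # every expert respects the threshold here

cum_primary = np.zeros(K)   # running primary loss of each expert
alg_primary = 0.0           # algorithm's cumulative primary loss
alg_secondary = 0.0         # algorithm's cumulative secondary loss
current = 0                 # expert the algorithm currently follows

for t in range(T):
    if t > 0 and t % B == 0:
        # Phase boundary: switch to the current primary-loss leader.
        current = int(np.argmin(cum_primary))
    alg_primary += primary[t, current]
    alg_secondary += secondary[t, current]
    cum_primary += primary[t]

best = cum_primary.min()
print(f"primary regret: {alg_primary - best:.1f} over T = {T} rounds")
print(f"secondary loss: {alg_secondary:.1f} vs threshold {alpha * T:.1f}")
```

On i.i.d. data like this, phase-based follow-the-leader quickly locks onto the best expert, so the primary regret grows sublinearly; the paper's actual guarantees are stated for general switching-limited algorithms and for adversarial losses satisfying the interval assumption above.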


