Exponential Weights Algorithms for Selective Learning

06/29/2021
by Mingda Qiao, et al.

We study the selective learning problem introduced by Qiao and Valiant (2019), in which the learner observes n labeled data points one at a time. At a time of its choosing, the learner selects a window length w and a model ℓ̂ from the model class ℒ, and then labels the next w data points using ℓ̂. The excess risk incurred by the learner is defined as the difference between the average loss of ℓ̂ over those w data points and the smallest possible average loss among all models in ℒ over those w data points. We give an improved algorithm, termed the hybrid exponential weights algorithm, that achieves an expected excess risk of O((loglog|ℒ| + loglog n)/log n). This result gives a doubly exponential improvement in the dependence on |ℒ| over the best known bound of O(√(log|ℒ|/log n)). We complement the positive result with an almost matching lower bound, which suggests the worst-case optimality of the algorithm. We also study a more restrictive family of learning algorithms that are bounded-recall in the sense that when a prediction window of length w is chosen, the learner's decision only depends on the most recent w data points. We analyze an exponential weights variant of the ERM algorithm in Qiao and Valiant (2019) and show that this new algorithm achieves an expected excess risk of O(√(log|ℒ|/log n)), which is nearly optimal among all bounded-recall learners. Our analysis builds on a generalized version of the selective mean prediction problem in Drucker (2013); Qiao and Valiant (2019), which may be of independent interest.
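To make the setup concrete, below is a minimal, self-contained Python sketch of the excess risk definition above, paired with a plain exponential-weights model selection rule. It is an illustration under assumed details, not the paper's hybrid exponential weights algorithm (whose construction is not given in the abstract): the function names, the learning rate eta, the toy threshold model class, and the fixed commit time t and window length w are all hypothetical choices made for exposition.

    # Sketch of the selective learning setup: after observing a prefix of the
    # stream, the learner commits to a window length w and a model, and its
    # excess risk on the next w points is measured exactly as defined above.
    # The model is chosen here by plain exponential weights over the losses
    # observed so far; this is NOT the paper's hybrid algorithm.
    import math
    import random

    def average_loss(model, window):
        """Average 0-1 loss of `model` over a window of (x, y) pairs."""
        return sum(model(x) != y for x, y in window) / len(window)

    def excess_risk(chosen, model_class, window):
        """Avg loss of the chosen model on the w-point window, minus the
        smallest avg loss attainable by any model in the class."""
        best = min(average_loss(m, window) for m in model_class)
        return average_loss(chosen, window) - best

    def exp_weights_pick(model_class, history, eta):
        """Sample a model with probability proportional to
        exp(-eta * cumulative loss on the points observed so far)."""
        cum = [sum(m(x) != y for x, y in history) for m in model_class]
        weights = [math.exp(-eta * c) for c in cum]
        return random.choices(model_class, weights=weights, k=1)[0]

    random.seed(0)
    n = 1000
    xs = [random.random() for _ in range(n)]
    stream = [(x, int(x > 0.5)) for x in xs]   # labels from a 0.5 threshold
    model_class = [lambda x: int(x > 0.3),     # two toy threshold models
                   lambda x: int(x > 0.7)]

    t = n // 4                                 # commit after t observations
    w = n // 4                                 # chosen window length
    chosen = exp_weights_pick(model_class, stream[:t], eta=2.0)
    print("excess risk on the next w points:",
          excess_risk(chosen, model_class, stream[t:t + w]))

In this toy run, exponential weights concentrates on whichever threshold model has fared better on the observed prefix, and the printed quantity is the per-window excess risk that the paper's bounds control in expectation.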
