Bandits with Temporal Stochastic Constraints

11/22/2018
by Priyank Agrawal, et al.

We study the effect of impairment on stochastic multi-armed bandits and develop new ways to mitigate it. The impairment effect is the phenomenon in which an agent accrues reward for an action only if they have played it at least a few times in the recent past. It is practically motivated by repetition and recency effects in domains such as advertising (where consumer behavior may require repeat actions by advertisers) and vocational training (where actions are complex skills that yield a payoff only once mastered through repetition). Impairment can be naturally modelled as a temporal constraint on the strategy space, and we provide two novel algorithms that achieve sublinear regret, each working under different assumptions on the impairment effect. We introduce a new notion called bucketing in our algorithm design, and show how it can effectively address impairment as well as a broader class of temporal constraints. Our regret bounds explicitly capture the cost of impairment and show that it scales (sub-)linearly with the degree of impairment. Our work complements recent work on modeling delays and corruptions, and we provide experimental evidence supporting our claims.
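The abstract does not give the paper's formal impairment model or algorithms, so the following is only a minimal simulation sketch to fix ideas. It assumes, hypothetically, a Bernoulli bandit in which an arm yields reward only if it has been pulled at least m times within the last w rounds, and pairs it with a bucketed variant of UCB1 that repeats the chosen arm for a block of consecutive rounds. The window w, threshold m, and bucket size are illustrative parameters, not the paper's definitions.

# Hypothetical sketch only: the impairment model (window w, threshold m)
# and the bucket size are assumptions; the paper's definitions may differ.
import math
import random
from collections import deque

class ImpairedBandit:
    """K-armed Bernoulli bandit where an arm pays off only if it has
    been pulled at least `m` times within the last `w` rounds (assumed model)."""
    def __init__(self, means, m=3, w=5, seed=0):
        self.means, self.m, self.w = means, m, w
        self.history = deque(maxlen=w)   # arms pulled in the last w rounds
        self.rng = random.Random(seed)

    def pull(self, arm):
        self.history.append(arm)
        # Reward accrues only once the repetition requirement is met.
        if list(self.history).count(arm) >= self.m:
            return 1.0 if self.rng.random() < self.means[arm] else 0.0
        return 0.0

def bucketed_ucb(bandit, n_arms, horizon, bucket=5):
    """UCB1 played in buckets: the chosen arm is repeated `bucket` times
    consecutively so the impairment threshold can be satisfied."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    t, total = 0, 0.0
    while t < horizon:
        if 0 in counts:
            arm = counts.index(0)          # initial round-robin exploration
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        for _ in range(min(bucket, horizon - t)):   # play one bucket
            r = bandit.pull(arm)
            counts[arm] += 1
            sums[arm] += r
            total += r
            t += 1
    return total

bandit = ImpairedBandit(means=[0.3, 0.5, 0.7])
print(bucketed_ucb(bandit, n_arms=3, horizon=10000))

Playing in buckets concentrates consecutive pulls on a single arm so the repetition requirement can actually be met, which matches the intuition behind the bucketing idea described above; the paper's actual algorithms and regret analysis are more refined than this sketch.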

Related research

06/18/2020
Learning by Repetition: Stochastic Multi-armed Bandits under Priming Effect
We study the effect of persistence of engagement on learning in a stocha...

05/30/2023
Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits
The study of collaborative multi-agent bandits has attracted significant...

02/22/2019
Better Algorithms for Stochastic Bandits with Adversarial Corruptions
We study the stochastic multi-armed bandits problem in the presence of a...

06/08/2022
Uplifting Bandits
We introduce a multi-armed bandit model where the reward is a sum of mul...

10/23/2020
Finite Continuum-Armed Bandits
We consider a situation where an agent has T resources to be allocated ...

12/13/2021
Safe Linear Leveling Bandits
Multi-armed bandits (MAB) are extensively studied in various settings wh...

06/01/2023
Last Switch Dependent Bandits with Monotone Payoff Functions
In a recent work, Laforgue et al. introduce the model of last switch dep...
