Collapsing Bandits and Their Application to Public Health Interventions

07/05/2020
by Aditya Mate, et al.

We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus "collapsing" any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve. The goal is to keep as many arms in the "good" state as possible by planning a limited budget of actions per round. Such Collapsing Bandits are natural models for many healthcare domains in which workers must simultaneously monitor patients and deliver interventions in a way that maximizes the health of their patient cohort. Our main contributions are as follows: (i) Building on the Whittle index technique for RMABs, we derive conditions under which the Collapsing Bandits problem is indexable. Our derivation hinges on novel conditions that characterize when the optimal policies may take the form of either "forward" or "reverse" threshold policies. (ii) We exploit the optimality of threshold policies to build fast algorithms for computing the Whittle index, including a closed-form expression. (iii) We evaluate our algorithm on several data distributions, including data from a real-world healthcare task in which a worker must monitor and deliver interventions to maximize their patients' adherence to tuberculosis medication. Our algorithm achieves a three-order-of-magnitude speedup over state-of-the-art RMAB techniques while achieving similar performance.
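The "collapsing" dynamics described above can be illustrated with a minimal sketch of a single arm: the true binary state evolves as a Markov chain, the planner tracks a belief (the probability the arm is in the good state), and playing the arm reveals the state, collapsing the belief to 0 or 1. This is our own simplified illustration, not the paper's implementation; the transition-parameter names `p11` and `p01` are assumed for this sketch, and action-dependent transitions are omitted.

```python
import random

class CollapsingArm:
    """One arm of a Collapsing Bandit (illustrative sketch).

    A 2-state Markov chain whose state is observed only when the arm
    is played. Parameter names are hypothetical, not from the paper:
      p11 = P(good at t+1 | good at t)
      p01 = P(good at t+1 | bad at t)
    """

    def __init__(self, p11, p01, initial_state=1):
        self.p11 = p11
        self.p01 = p01
        self.state = initial_state           # true (hidden) state: 1 = good, 0 = bad
        self.belief = float(initial_state)   # P(state == 1) given past observations

    def step(self, play):
        # The true state evolves whether or not the arm is played.
        p_good = self.p11 if self.state == 1 else self.p01
        self.state = 1 if random.random() < p_good else 0
        if play:
            # Playing observes the state fully, collapsing uncertainty.
            self.belief = float(self.state)
        else:
            # Passive: propagate belief through the transition probabilities.
            self.belief = self.belief * self.p11 + (1 - self.belief) * self.p01
        return self.belief
```

Under this sketch, a passive arm's belief drifts deterministically toward the chain's stationary probability of the good state, while a played arm snaps back to certainty; a budget-constrained planner (e.g. a Whittle-index policy) decides each round which arms' uncertainty is most valuable to collapse.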


research
04/30/2023

Indexability of Finite State Restless Multi-Armed Bandit and Rollout Policy

We consider finite state restless multi-armed bandit problem. The decisi...
research
01/28/2022

Networked Restless Multi-Armed Bandits for Mobile Interventions

Motivated by a broad class of mobile intervention problems, we propose a...
research
05/22/2023

Limited Resource Allocation in a Non-Markovian World: The Case of Maternal and Child Healthcare

The success of many healthcare programs depends on participants' adheren...
research
05/17/2021

Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare

In many public health settings, it is important for patients to adhere t...
research
06/14/2021

Planning to Fairly Allocate: Probabilistic Fairness in the Restless Bandit Setting

Restless and collapsing bandits are commonly used to model constrained r...
research
03/01/2023

Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks

Motivated by applications such as machine repair, project monitoring, an...
research
08/22/2021

Moments in the Production of Space: Developing a Generic Adolescent Girls and Young Women Health Information Systems in Zimbabwe

With global targets to end AIDS by 2030 and to eliminate new HIV infecti...
