Causal Bandits: Learning Good Interventions via Causal Inference

06/10/2016
by Finnian Lattimore, et al.

We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-armed bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information.
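To illustrate the kind of feedback the abstract refers to, here is a minimal sketch (not the authors' algorithm) of a "parallel" causal bandit: several independent binary causes influence a reward, and because the non-intervened variables are still observed each round, a single sample informs the estimate for every interventional arm at once. All names, probabilities, and the reward structure below are hypothetical choices for illustration.

```python
import random

random.seed(0)

N = 5        # number of independent binary causes X_0..X_{N-1}
T = 2000     # rounds of purely observational play

# Hypothetical ground truth: Y depends only on X_0, with
# E[Y | X_0 = 1] = 0.6 and E[Y | X_0 = 0] = 0.2.
def sample_round():
    x = [random.random() < 0.5 for _ in range(N)]
    p_y = 0.6 if x[0] else 0.2
    y = random.random() < p_y
    return x, y

# Causal feedback: one observational sample informs ALL 2N interventional
# arms do(X_j = v), because in this parallel model conditioning on X_j
# coincides with intervening on it (the causes are independent).
totals = {(j, v): [0, 0] for j in range(N) for v in (0, 1)}  # [sum_y, count]
for _ in range(T):
    x, y = sample_round()
    for j in range(N):
        s = totals[(j, int(x[j]))]
        s[0] += int(y)
        s[1] += 1

estimates = {arm: s[0] / s[1] for arm, s in totals.items() if s[1] > 0}
best_arm = max(estimates, key=estimates.get)
print(best_arm, round(estimates[best_arm], 3))
```

A standard bandit algorithm would need to sample each of the 2N arms separately; here T observational rounds yield roughly T samples per arm, which is the intuition behind the improved simple-regret bound claimed above.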
