Restless dependent bandits with fading memory
We study the stochastic multi-armed bandit problem in the case when the arm samples are dependent over time and generated from so-called weak ϕ-mixing processes. We establish a ϕ-Mix Improved UCB algorithm and provide both problem-dependent and problem-independent regret analyses in two different scenarios. In the first, so-called fast-mixing scenario, we show that the pseudo-regret enjoys the same upper bound (up to a factor) as for i.i.d. observations; whereas in the second, slow-mixing scenario, we discover a surprising effect: the regret upper bound is similar to the one in the independent case, with an incremental additive term which does not depend on the number of arms. The analysis of the slow-mixing scenario is supported by a minimax lower bound, which matches the obtained upper bound up to a log(T) factor.
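The abstract describes a UCB-type index policy whose confidence widths are adjusted for temporal dependence between samples. As an illustration only, below is a minimal Python sketch of such a policy under stated assumptions: the inflation constant `c_phi` and the AR(1) toy reward process are hypothetical placeholders, and this is not the authors' exact ϕ-Mix Improved UCB index, which is defined in the paper itself.

```python
import math
import random

def ucb_with_mixing(n_arms, horizon, reward_fn, c_phi=2.0):
    """UCB-style index policy with a mixing-inflated confidence width.

    c_phi > 1 widens the usual sqrt(2 log t / n) bonus to compensate for
    dependence between successive samples; it stands in (hypothetically)
    for the mixing-dependent constant appearing in the paper's analysis.
    """
    counts = [0] * n_arms    # number of pulls per arm
    sums = [0.0] * n_arms    # cumulative reward per arm
    history = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # pull each arm once to initialize
        else:
            def index(a):
                mean = sums[a] / counts[a]
                bonus = math.sqrt(c_phi * 2.0 * math.log(t) / counts[a])
                return mean + bonus
            arm = max(range(n_arms), key=index)
        r = reward_fn(arm, t)
        counts[arm] += 1
        sums[arm] += r
        history.append((arm, r))
    return history

# Toy dependent rewards: each arm follows an AR(1) process, so samples are
# correlated over time -- a crude stand-in for a restless ϕ-mixing arm.
state = [0.0, 0.0, 0.0]
means = [0.3, 0.5, 0.7]

def reward_fn(arm, t):
    state[arm] = 0.8 * state[arm] + 0.2 * random.gauss(0, 1)
    return means[arm] + state[arm]

random.seed(0)
hist = ucb_with_mixing(3, 2000, reward_fn)
```

The AR(1) coefficient 0.8 controls how quickly the dependence fades, loosely mimicking the fast- versus slow-mixing regimes contrasted in the abstract.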