Efficient Contextual Bandits in Non-stationary Worlds

08/05/2017
by Haipeng Luo, et al.

Most contextual bandit algorithms minimize regret with respect to the best fixed policy, a questionable benchmark for the non-stationary environments that are ubiquitous in applications. In this work, we obtain efficient contextual bandit algorithms with strong guarantees for alternative notions of regret suited to these non-stationary environments. Two of our algorithms equip existing methods for i.i.d. problems with sophisticated statistical tests that dynamically adapt to a change in distribution. The third approach uses a recent technique for combining multiple bandit algorithms, with each copy starting at a different round so as to learn over different data segments. We analyze several notions of regret for these methods, including the first results on dynamic regret for efficient contextual bandit algorithms.
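The test-and-restart idea behind the first two approaches can be illustrated with a toy sketch: run a base bandit algorithm, compare the average reward over a recent window against the long-run average, and restart learning when the two diverge by more than a threshold. This is a deliberately simplified stand-in, not the paper's actual statistical tests; the class name, window size, and threshold below are all illustrative assumptions.

```python
import random


class RestartingBandit:
    """Toy epsilon-greedy bandit that restarts when a crude
    change-point test fires. A simplified illustration of the
    test-and-restart idea, not the paper's actual tests."""

    def __init__(self, n_arms, epsilon=0.1, window=50, threshold=0.5):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.window = window        # size of the recent window for the test
        self.threshold = threshold  # reward gap that triggers a restart
        self.reset()

    def reset(self):
        """Forget everything learned so far and start fresh."""
        self.counts = [0] * self.n_arms
        self.means = [0.0] * self.n_arms
        self.history = []           # rewards observed since the last restart

    def select_arm(self):
        # Explore with probability epsilon, or while some arm is untried.
        if random.random() < self.epsilon or 0 in self.counts:
            return random.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.means[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
        self.history.append(reward)
        if self._change_detected():
            self.reset()

    def _change_detected(self):
        # Compare the mean reward in the most recent window against the
        # mean of everything before it; a large gap suggests the reward
        # distribution has shifted.
        if len(self.history) < 2 * self.window:
            return False
        recent = self.history[-self.window:]
        older = self.history[:-self.window]
        gap = abs(sum(recent) / len(recent) - sum(older) / len(older))
        return gap > self.threshold
```

For example, on a two-armed problem where the best arm switches halfway through, the drop in observed reward after the switch widens the window gap, the test fires, and the fresh copy relearns the new best arm.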
