Breaking the √(T) Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
We prove an instance-independent (poly-)logarithmic regret bound for stochastic contextual bandits with linear payoff. Previously, in <cit.>, a lower bound of Ω(√(T)) was shown for the contextual linear bandit problem with arbitrary (adversarially chosen) contexts. In this paper, we show that stochastic contexts indeed help to reduce the regret from √(T) to polylog(T). We propose Low Regret Stochastic Contextual Bandits (LR-SCB), which takes advantage of the stochastic contexts and performs parameter estimation (in ℓ_2 norm) and regret minimization simultaneously. LR-SCB works in epochs, where the parameter estimate from the previous epoch is used to reduce the regret of the current epoch. The (poly-)logarithmic regret of LR-SCB stems from two crucial facts: (a) the application of a norm-adaptive algorithm to exploit the parameter estimation and (b) an analysis of the shifted linear contextual bandit problem, showing that the regret of the shifted problem scales with the magnitude of the shift, which shrinks as the parameter estimate improves across epochs. We also show experimentally that stochastic contexts indeed incur a regret that scales as polylog(T).
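Since the abstract only sketches LR-SCB's structure, the following is a minimal illustrative skeleton of the epoch-based idea, not the authors' algorithm: each epoch's least-squares estimate shifts the problem for the next epoch, and a plain LinUCB-style bonus on the residual stands in for the norm-adaptive subroutine. All concrete choices (dimension d, arm count K, first-epoch length T0, noise level, doubling schedule) are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, noise = 5, 10, 0.1                 # dimension, arms/round, noise level (assumed)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

def contexts():
    """Stochastic contexts: K unit-norm feature vectors drawn i.i.d. each round."""
    X = rng.normal(size=(K, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

theta_hat = np.zeros(d)                  # estimate carried across epochs
regret, T0 = 0.0, 100                    # T0: first-epoch length (assumed)

for epoch in range(6):
    n = T0 * 2 ** epoch                  # doubling epoch lengths (assumed schedule)
    V, b = np.eye(d), np.zeros(d)        # ridge statistics for the *shifted* problem
    for _ in range(n):
        X = contexts()
        # Shifted problem: rewards minus the known part x @ theta_hat have
        # mean x @ (theta_star - theta_hat); the residual's shrinking norm
        # is what a norm-adaptive routine would exploit. Here a plain
        # LinUCB bonus on the residual stands in for that routine.
        Vinv = np.linalg.inv(V)
        delta_hat = Vinv @ b
        bonus = np.sqrt(np.einsum("kd,de,ke->k", X, Vinv, X))
        k = int(np.argmax(X @ (theta_hat + delta_hat) + bonus))
        r = X[k] @ theta_star + noise * rng.normal()
        regret += (X @ theta_star).max() - X[k] @ theta_star
        V += np.outer(X[k], X[k])
        b += (r - X[k] @ theta_hat) * X[k]   # regress the shifted reward
    theta_hat = theta_hat + np.linalg.solve(V, b)  # refine estimate for next epoch
    print(f"epoch {epoch}: n={n:5d}  "
          f"est. error={np.linalg.norm(theta_hat - theta_star):.3f}  "
          f"cum. regret={regret:.1f}")
```

The point of the skeleton is the mechanism claimed above: because the residual θ* − θ̂ shrinks from epoch to epoch, exploration on the shifted problem contributes less and less regret, which is what drives the claimed polylog(T) scaling.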