On the Interplay Between Misspecification and Sub-optimality Gap in Linear Contextual Bandits

03/16/2023

by Weitong Zhang et al.

We study linear contextual bandits in the misspecified setting, where the expected reward function can be approximated by a linear function class up to a bounded misspecification level ζ > 0. We propose an algorithm based on a novel data selection scheme, which selects only the contextual vectors with large uncertainty for online regression. We show that, when the misspecification level ζ is dominated by Õ(Δ/√d), with Δ being the minimal sub-optimality gap and d being the dimension of the contextual vectors, our algorithm enjoys the same gap-dependent regret bound Õ(d²/Δ) as in the well-specified setting, up to logarithmic factors. In addition, we show that an existing algorithm, SupLinUCB (Chu et al., 2011), can also achieve a gap-dependent constant regret bound without knowledge of the sub-optimality gap Δ. Together with a lower bound adapted from Lattimore et al. (2020), our results suggest an interplay between the misspecification level and the sub-optimality gap: (1) the linear contextual bandit model is efficiently learnable when ζ ≤ Õ(Δ/√d); and (2) it is not efficiently learnable when ζ ≥ Ω̃(Δ/√d). Experiments on both synthetic and real-world datasets corroborate our theoretical results.
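The abstract's data selection idea can be illustrated with a minimal sketch: a LinUCB-style learner that updates its regression statistics only on rounds where the chosen context has large uncertainty (elliptical norm under the inverse Gram matrix). All parameter values, the threshold γ, and the specific misspecified reward model below are illustrative assumptions, not the paper's actual algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 10, 2000
theta_star = rng.normal(size=d) / np.sqrt(d)  # hypothetical true linear parameter
zeta = 0.01                                   # misspecification level (assumed small)

A = np.eye(d)    # regularized Gram matrix of the *selected* contexts only
b = np.zeros(d)  # accumulated reward-weighted selected contexts
gamma = 0.2      # uncertainty threshold for data selection (illustrative value)
beta = 1.0       # confidence-width multiplier (illustrative value)

n_selected = 0
for t in range(T):
    X = rng.normal(size=(K, d)) / np.sqrt(d)        # contexts for K arms this round
    theta_hat = np.linalg.solve(A, b)               # ridge-regression estimate
    A_inv = np.linalg.inv(A)
    # uncertainty of each arm: ||x||_{A^{-1}}, scaled by beta
    width = beta * np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
    a = int(np.argmax(X @ theta_hat + width))       # optimistic (UCB) arm choice
    x = X[a]
    # misspecified reward: linear part plus a bounded zeta-sized perturbation and noise
    r = x @ theta_star + zeta * np.sin(10.0 * x.sum()) + 0.1 * rng.normal()
    # data selection: feed the regression only when the chosen context is uncertain
    if width[a] >= gamma:
        A += np.outer(x, x)
        b += r * x
        n_selected += 1

print(f"rounds used for regression: {n_selected} of {T}")
```

Because uncertainty shrinks as selected contexts accumulate, only an early subset of rounds updates the regression; this is the mechanism that keeps the bounded misspecification from contaminating the estimate on low-uncertainty rounds.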
