Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs

06/24/2022
by   Yifan Lin, et al.

In this paper we consider the contextual multi-armed bandit problem with linear payoffs under a risk-averse criterion. At each round, contexts are revealed for each arm, and the decision maker chooses one arm to pull and receives the corresponding reward. In particular, we adopt mean-variance as the risk criterion, so the best arm is the one with the largest mean-variance reward. We apply the Thompson Sampling algorithm to the disjoint model and provide a comprehensive regret analysis for a variant of the proposed algorithm. For T rounds, K actions, and d-dimensional feature vectors, we prove a regret bound of O((1 + ρ + 1/ρ) d ln T ln(K/δ) √(d K T^{1+2ϵ} ln(K/δ) (1/ϵ))) that holds with probability 1 − δ under the mean-variance criterion with risk tolerance ρ, for any 0 < ϵ < 1/2 and 0 < δ < 1. The empirical performance of the proposed algorithms is demonstrated via a portfolio selection problem.
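To make the setting concrete, the following is a minimal simulation sketch of Thompson Sampling for a disjoint linear contextual bandit with a mean-variance arm score. It is an illustration under stated assumptions, not the authors' algorithm: the mean-variance score is assumed to take the form ρ·(estimated mean) − (empirical variance), the per-arm reward model, noise levels, and priors are made up for the demo, and the variance term is tracked with a simple running (Welford) estimate rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d, T, rho = 3, 4, 2000, 1.0          # arms, feature dim, rounds, risk tolerance (demo values)
theta = rng.normal(size=(K, d))          # hidden parameter of each arm (disjoint model)
noise_sd = np.array([0.1, 1.0, 0.5])     # per-arm reward noise, so the variance penalty differs

# Per-arm Bayesian ridge-regression state: B = X^T X + I, f = X^T y
B = np.stack([np.eye(d)] * K)
f = np.zeros((K, d))

# Running empirical mean/variance of each arm's observed rewards (Welford)
counts = np.zeros(K)
means = np.zeros(K)
M2 = np.zeros(K)
var_est = np.zeros(K)

for t in range(T):
    x = rng.normal(size=(K, d)) / np.sqrt(d)        # context revealed for each arm
    scores = np.empty(K)
    for a in range(K):
        cov = np.linalg.inv(B[a])
        mu = cov @ f[a]
        theta_tilde = rng.multivariate_normal(mu, cov)  # Thompson sample from the posterior
        mean_hat = x[a] @ theta_tilde
        # assumed mean-variance score: larger is better
        scores[a] = rho * mean_hat - var_est[a]
    a = int(np.argmax(scores))
    r = x[a] @ theta[a] + rng.normal(scale=noise_sd[a])

    # Posterior update for the pulled arm
    B[a] += np.outer(x[a], x[a])
    f[a] += r * x[a]

    # Welford update of the empirical reward variance
    counts[a] += 1
    delta = r - means[a]
    means[a] += delta / counts[a]
    M2[a] += delta * (r - means[a])
    if counts[a] > 1:
        var_est[a] = M2[a] / (counts[a] - 1)

print("pull counts per arm:", counts)
```

With ρ = 1 the score trades off the sampled expected reward against each arm's observed reward variance, so the noisiest arm tends to be pulled less often than a purely mean-maximizing Thompson sampler would pull it.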
