Contextual Bandits for Adapting to Changing User Preferences Over Time

09/21/2020
by Dattaraj Rao, et al.

Contextual bandits provide an effective way to model the dynamic-data problem in machine learning (ML) by leveraging online (incremental) learning to continuously adjust predictions to a changing environment. We explore contextual bandits, an extension of the traditional reinforcement learning (RL) problem, and build a novel algorithm to solve it using an array of action-based learners. We apply this approach to model an article recommendation system, using an array of stochastic gradient descent (SGD) learners to predict the reward of each action taken, and then extend the approach to the publicly available MovieLens dataset. First, we make available a simplified simulated dataset showing user preferences that vary over time, along with an evaluation of static and dynamic learning algorithms on it. This dataset, released as part of this research, is intentionally simulated with a limited number of features and can be used to evaluate different problem-solving strategies. We build a classifier on a static dataset and evaluate its performance, showing the limitations of a static learner trained on a fixed context at a single point in time and how a change in that context degrades its accuracy. Next, we develop a novel algorithm for solving the contextual bandit problem. Like linear bandits, this algorithm maps the reward as a function of the context vector, but it uses an array of learners, with a separate SGD learner per arm, to capture variation between actions/arms. Finally, we apply this contextual bandit algorithm to predicting movie ratings over time by different users from the standard MovieLens dataset and demonstrate the results.
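To make the per-arm construction concrete, below is a minimal sketch (not the authors' implementation) of a contextual bandit built from an array of SGD learners, one per arm. The abstract only specifies a separate SGD learner per arm that maps the context vector to a reward estimate; the class name `PerArmSGDBandit`, the epsilon-greedy exploration rule, and the use of scikit-learn's `SGDRegressor` are illustrative assumptions.

```python
# Hedged sketch of a per-arm SGD contextual bandit.
# Assumptions (not from the paper): epsilon-greedy exploration and
# scikit-learn's SGDRegressor as the incremental per-arm learner.
import numpy as np
from sklearn.linear_model import SGDRegressor

class PerArmSGDBandit:
    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        # One incremental (online) learner per arm/action.
        self.learners = [SGDRegressor(learning_rate="constant", eta0=0.01)
                         for _ in range(n_arms)]
        self.fitted = [False] * n_arms  # track which learners have seen data

    def select_arm(self, context):
        """Pick an arm for a 1-D numpy context vector: explore with
        probability epsilon, otherwise exploit the reward estimates."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.learners)))
        estimates = [
            learner.predict(context.reshape(1, -1))[0] if done else 0.0
            for learner, done in zip(self.learners, self.fitted)
        ]
        return int(np.argmax(estimates))

    def update(self, arm, context, reward):
        """Incrementally refit only the chosen arm's learner."""
        self.learners[arm].partial_fit(context.reshape(1, -1), [reward])
        self.fitted[arm] = True
```

In an online loop, `select_arm` is called with the current context, the environment returns an observed reward, and `update` refits only the chosen arm's learner via `partial_fit`; this incremental per-arm update is what would let such a model track drifting user preferences over time.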

