A Field Test of Bandit Algorithms for Recommendations: Understanding the Validity of Assumptions on Human Preferences in Multi-armed Bandits

04/16/2023
by Liu Leqi, et al.

Personalized recommender systems suffuse modern life, shaping what media we read and what products we consume. The algorithms powering such systems tend to be supervised learning-based heuristics, such as latent factor models with a variety of heuristically chosen prediction targets. Meanwhile, theoretical treatments of recommendation frequently address the decision-theoretic nature of the problem, including the need to balance exploration and exploitation, via the multi-armed bandit (MAB) framework. However, MAB-based approaches rely heavily on assumptions about human preferences, and these assumptions are seldom tested in human subject studies, partly because publicly available toolkits for conducting such studies are lacking. In this work, we conduct a study with crowdworkers in a comics-recommendation MAB setting, where each arm represents a comic category and users provide feedback after each recommendation. We check the validity of a core MAB assumption, namely that human preferences (reward distributions) are fixed over time, and find that it does not hold. This finding suggests that any MAB algorithm used for recommender systems should account for human preference dynamics. In the course of this study, we provide a flexible experimental framework for understanding human preference dynamics and testing MAB algorithms with human users. The code for our experimental framework and the collected data can be found at https://github.com/HumainLab/human-bandit-evaluation.
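To make the setup concrete, below is a minimal sketch of the kind of bandit loop such a study instruments. This is not the paper's code (the actual framework is in the linked repository): the category names, the 1-5 rating scale, the epsilon-greedy policy, and the satiation term in the simulated rater are all illustrative assumptions, and the naive drift check only gestures at how one might probe the fixed-reward-distribution assumption.

```python
import random
from statistics import mean

# Hypothetical comic categories standing in for bandit arms; the study's
# actual categories and interface are in the paper's linked repository.
ARMS = ["superhero", "slice-of-life", "sci-fi", "humor", "fantasy"]

def get_user_rating(arm: str, pulls_so_far: int) -> float:
    """Simulated stand-in for a crowdworker's 1-5 rating. The decay term
    mimics satiation (repeated recommendations from one category lose
    appeal); base ratings and decay rate are invented for illustration."""
    base = {"superhero": 4.2, "slice-of-life": 3.5, "sci-fi": 3.9,
            "humor": 3.2, "fantasy": 3.7}[arm]
    return max(1.0, min(5.0, random.gauss(base - 0.05 * pulls_so_far, 0.5)))

def epsilon_greedy(n_rounds: int = 200, epsilon: float = 0.1):
    """Run an epsilon-greedy bandit, logging (round, arm, reward)."""
    rewards = {arm: [] for arm in ARMS}
    history = []
    for t in range(n_rounds):
        unpulled = [a for a in ARMS if not rewards[a]]
        if unpulled:                     # pull every arm once first
            arm = random.choice(unpulled)
        elif random.random() < epsilon:  # explore a random category
            arm = random.choice(ARMS)
        else:                            # exploit the best empirical mean
            arm = max(ARMS, key=lambda a: mean(rewards[a]))
        r = get_user_rating(arm, len(rewards[arm]))
        rewards[arm].append(r)
        history.append((t, arm, r))
    return history

def drift_check(history, arm):
    """Naive probe of the fixed-reward-distribution assumption: the gap
    between an arm's mean rating over the first and second half of its
    pulls. A persistently large gap suggests non-stationary preferences."""
    rs = [r for _, a, r in history if a == arm]
    if len(rs) < 4:
        return None
    half = len(rs) // 2
    return mean(rs[:half]) - mean(rs[half:])

if __name__ == "__main__":
    hist = epsilon_greedy()
    for arm in ARMS:
        gap = drift_check(hist, arm)
        print(arm, "drift gap:", None if gap is None else round(gap, 2))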

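In an actual deployment, get_user_rating would be replaced by the crowdworker-facing interface. The point of the sketch is that a positive drift gap, produced here by the assumed satiation term, is precisely the kind of preference dynamic a stationary MAB algorithm cannot model, which is what the paper's finding implies such systems must account for.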

Related research

Online certification of preference-based fairness for personalized recommender systems (04/29/2021)
We propose to assess the fairness of personalized recommender systems in...

The Use of Bandit Algorithms in Intelligent Interactive Recommender Systems (07/01/2021)
In today's business marketplace, many high-tech Internet enterprises con...

Graph Clustering Bandits for Recommendation (05/02/2016)
We investigate an efficient context-dependent clustering technique for r...

Preference-based Online Learning with Dueling Bandits: A Survey (07/30/2018)
In machine learning, the notion of multi-armed bandits refers to a class...

Towards the D-Optimal Online Experiment Design for Recommender Selection (10/23/2021)
Selecting the optimal recommender via online exploration-exploitation is...

Human Preferences as Dueling Bandits (04/21/2022)
The dramatic improvements in core information retrieval tasks engendered...

Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach (09/13/2020)
Online healthcare communities provide users with various healthcare inte...
