Collaborative Learning and Personalization in Multi-Agent Stochastic Linear Bandits

06/15/2021
by   Avishek Ghosh, et al.

We consider the problem of minimizing regret in an N-agent heterogeneous stochastic linear bandit framework, where the agents (users) are similar but not all identical. We model user heterogeneity using two ideas popular in practice: (i) a clustering framework, where users are partitioned into groups, with users in the same group identical to each other but different across groups, and (ii) a personalization framework, where no two users are necessarily identical but each user's parameter is close to the population average. In the clustered setup, we propose a novel algorithm based on successive refinement of cluster identities and regret minimization. We show that, for any agent, the regret scales as 𝒪(√(T/N)) if the agent is in a `well-separated' cluster, and as 𝒪(T^(1/2+ε)/N^(1/2-ε)) if its cluster is not well separated, where ε is positive and arbitrarily close to 0. Our algorithm adapts to the cluster separation and is parameter free: it does not need to know the number of clusters, the separation, or the cluster sizes, yet its regret guarantee adapts to the inherent complexity. In the personalization framework, we introduce a natural algorithm in which each personal bandit instance is initialized with an estimate of the global average model. We show that an agent i whose parameter deviates from the population average by ϵ_i attains a regret scaling of 𝒪(ϵ_i√(T)). This demonstrates that if the user representations are close (small ϵ_i), the resulting regret is low, and vice versa. The results are empirically validated, and we observe superior performance of our adaptive algorithms over non-adaptive baselines.
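The personalization idea in the abstract (warm-starting each user's bandit at the population average) can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a toy one-dimensional greedy linear bandit with a ridge estimate shrunk toward an initialization point, and all function names (`ridge_estimate`, `run_agent`) and parameter choices are illustrative assumptions.

```python
import random

def ridge_estimate(data, theta_init, lam=1.0):
    """1-D ridge estimate shrunk toward theta_init (the warm start).

    Solves argmin_theta  sum_{(x, r)} (r - theta * x)^2 + lam * (theta - theta_init)^2,
    so with no data the estimate equals theta_init, and it moves toward the
    least-squares solution as observations accumulate.
    """
    num = lam * theta_init + sum(x * r for x, r in data)
    den = lam + sum(x * x for x, _ in data)
    return num / den

def run_agent(theta_true, theta_init, horizon=50, seed=0):
    """Greedy 1-D linear bandit on actions {-1, +1}; returns cumulative regret.

    theta_true plays the role of an agent's personal parameter; theta_init is
    the initialization (e.g. an estimate of the population average model).
    """
    rng = random.Random(seed)
    data, regret = [], 0.0
    for _ in range(horizon):
        theta_hat = ridge_estimate(data, theta_init)
        action = 1.0 if theta_hat >= 0 else -1.0          # greedy action
        reward = theta_true * action + rng.gauss(0.0, 0.1)  # noisy linear reward
        regret += abs(theta_true) - theta_true * action     # instantaneous regret
        data.append((action, reward))
    return regret

# An agent close to the population average (theta_true = 0.9, average = 1.0)
# pays less regret when warm-started at the average than at a poor init:
warm = run_agent(0.9, theta_init=1.0, seed=1)
cold = run_agent(0.9, theta_init=-0.5, seed=1)
```

In this toy setting the warm-started agent picks the optimal action from the first round, while the badly initialized one incurs regret until its estimate corrects, mirroring the 𝒪(ϵ_i√(T)) intuition that a small deviation from the average yields low regret.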
