On Distributed Cooperative Decision-Making in Multiarmed Bandits

12/21/2015
by Peter Landgren, et al.

We study the explore-exploit tradeoff in distributed cooperative decision-making using the context of the multiarmed bandit (MAB) problem. For the distributed cooperative MAB problem, we design the cooperative UCB algorithm that comprises two interleaved distributed processes: (i) running consensus algorithms for estimation of rewards, and (ii) upper-confidence-bound-based heuristics for selection of arms. We rigorously analyze the performance of the cooperative UCB algorithm and characterize the influence of communication graph structure on the decision-making performance of the group.
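To make the two interleaved processes concrete, here is a minimal simulation sketch of this style of cooperative UCB. All specifics are assumptions for illustration (a ring communication graph, Gaussian rewards, the standard UCB1 index), not the paper's exact algorithm or constants: each agent pulls an arm using a UCB index computed from its local estimates, then a consensus step averages reward sums and pull counts with its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative, not from the paper):
# M agents, K arms, Gaussian rewards with unknown means.
M, K, T = 4, 3, 2000
true_means = np.array([0.2, 0.5, 0.8])

# Communication graph: a ring over the M agents, encoded as a
# doubly stochastic consensus matrix P (each agent averages
# equally with itself and its two neighbors).
P = np.zeros((M, M))
for i in range(M):
    for j in (i - 1, i, i + 1):
        P[i, j % M] = 1.0 / 3.0

# Per-agent running statistics: s[i, k] = estimated cumulative
# reward of arm k at agent i, n[i, k] = estimated pull count.
s = np.zeros((M, K))
n = np.zeros((M, K))

for t in range(1, T + 1):
    for i in range(M):
        if t <= K:
            arm = t - 1  # initialization: pull each arm once
        else:
            # UCB1-style index: empirical mean + exploration bonus.
            ucb = s[i] / n[i] + np.sqrt(2.0 * np.log(t) / n[i])
            arm = int(np.argmax(ucb))
        r = rng.normal(true_means[arm], 0.1)
        s[i, arm] += r
        n[i, arm] += 1.0
    # Consensus step: neighbors average their reward sums and
    # counts, spreading information over the graph.
    s = P @ s
    n = P @ n
```

Because P is doubly stochastic, the consensus step preserves the network-wide totals of `s` and `n` while mixing information across agents; over time each agent's index should concentrate its pulls on the best arm (index 2 here).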
