Exploration with Unreliable Intrinsic Reward in Multi-Agent Reinforcement Learning

06/05/2019 · by Wendelin Böhmer, et al.

This paper investigates the use of intrinsic reward to guide exploration in multi-agent reinforcement learning. We discuss the challenges in applying intrinsic reward to multiple collaborative agents and demonstrate how unreliable reward can prevent decentralized agents from learning the optimal policy. We address this problem with a novel framework, Independent Centrally-assisted Q-learning (ICQL), in which decentralized agents share control and an experience replay buffer with a centralized agent. Only the centralized agent is intrinsically rewarded, but the decentralized agents still benefit from improved exploration, without the distraction of unreliable incentives.
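
The control and learning split described in the abstract can be made concrete with a small sketch. The following is a minimal, hypothetical single-agent illustration, not the paper's implementation: it assumes a toy environment interface (reset() and step() returning a hashable state, an extrinsic reward, and a done flag), tabular Q-learning, and a simple count-based novelty bonus standing in for whatever intrinsic reward is used. The paper's actual method applies this idea across multiple collaborative agents.

```python
import random
from collections import defaultdict, deque

# Hypothetical illustration of the ICQL split: names, hyper-parameters,
# and the count-based bonus are assumptions, not the paper's choices.

GAMMA, ALPHA, EPS = 0.99, 0.1, 0.1

class TabularQ:
    """Plain epsilon-greedy tabular Q-learner."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def act(self, state):
        if random.random() < EPS:
            return random.randrange(self.n_actions)
        vals = self.q[state]
        return max(range(self.n_actions), key=vals.__getitem__)

    def update(self, s, a, r, s_next, done):
        target = r if done else r + GAMMA * max(self.q[s_next])
        self.q[s][a] += ALPHA * (target - self.q[s][a])

def train_icql(env, n_actions, episodes=500, p_central=0.5, beta=0.1):
    decentralized = TabularQ(n_actions)  # trained on extrinsic reward only
    centralized = TabularQ(n_actions)    # trained on extrinsic + intrinsic
    replay = deque(maxlen=10_000)        # experience shared by both learners
    visits = defaultdict(int)

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Shared control: the intrinsically motivated centralized agent
            # drives exploration for a fraction of the steps.
            actor = centralized if random.random() < p_central else decentralized
            a = actor.act(s)
            s_next, r_ext, done = env.step(a)

            # Count-based novelty bonus, computed once at collection time and
            # stored with the transition so replayed updates stay consistent.
            visits[s_next] += 1
            r_int = beta / visits[s_next] ** 0.5
            replay.append((s, a, r_ext, r_int, s_next, done))
            s = s_next

            # Both agents learn from the same replayed transitions, but only
            # the centralized agent sees the (possibly unreliable) bonus.
            for bs, ba, be, bi, bs2, bd in random.sample(replay, min(32, len(replay))):
                decentralized.update(bs, ba, be, bs2, bd)
                centralized.update(bs, ba, be + bi, bs2, bd)

    # Deploy the decentralized policy, which benefited from the centralized
    # agent's exploration without ever seeing the intrinsic incentive.
    return decentralized
```

Because the bonus only enters the centralized agent's TD target, the decentralized policy in this sketch converges on the extrinsic objective even when the bonus is noisy or misleading, which is the failure mode the abstract attributes to unreliable intrinsic reward.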
