Greedy-based Value Representation for Optimal Coordination in Multi-agent Reinforcement Learning
Due to the limited representational capacity of the joint Q-value function, multi-agent reinforcement learning (MARL) methods with linear or monotonic value decomposition suffer from relative overgeneralization. As a result, they cannot guarantee optimal coordination. Existing methods address relative overgeneralization either by pursuing complete expressiveness or by learning a bias, neither of which is sufficient to solve the problem. In this paper, we propose optimal consistency, a criterion for evaluating the optimality of coordination. To achieve optimal consistency, we introduce the True-Global-Max (TGM) principle for linear and monotonic value decomposition; the TGM principle holds when the optimal stable point is the unique stable point. We therefore propose greedy-based value representation (GVR), which ensures the optimal stable point via inferior target shaping and eliminates non-optimal stable points via superior experience replay. Theoretical proofs and empirical results demonstrate that our method ensures optimal consistency under sufficient exploration. In experiments on various benchmarks, GVR significantly outperforms state-of-the-art baselines.
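For context, the True-Global-Max (TGM) principle named above is typically understood as requiring that the greedy joint action under the decomposed joint value coincide with that of the true joint action value. A minimal sketch in our own notation (Q_jt for the mixed joint value, f_mix for the linear or monotonic mixing function, Q^* for the true joint action value; these symbols are illustrative assumptions, not necessarily the paper's):

% Hedged sketch of the TGM condition for linear/monotonic value decomposition.
% Q_{jt} is produced by mixing per-agent utilities Q_i; Q^{*} is the true value.
\arg\max_{\mathbf{u}} Q_{jt}(\boldsymbol{\tau}, \mathbf{u})
  \;=\; \arg\max_{\mathbf{u}} Q^{*}(\boldsymbol{\tau}, \mathbf{u}),
\qquad
Q_{jt}(\boldsymbol{\tau}, \mathbf{u})
  = f_{\mathrm{mix}}\bigl(Q_1(\tau_1, u_1), \ldots, Q_n(\tau_n, u_n)\bigr),
\quad
\frac{\partial f_{\mathrm{mix}}}{\partial Q_i} \ge 0 .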