Multi-Agent Reinforcement Learning with Graph Clustering

08/20/2020
by   Tianze Zhou, et al.
In this paper, we introduce the group concept into multi-agent reinforcement learning (MARL). Agents are divided into several groups, and each group completes a specific subtask so that the agents can cooperate to accomplish the main task. Existing methods exchange information between agents through communication vectors, which can lead to communication redundancy. To address this problem, we propose a MARL method based on graph clustering that allows agents to adaptively learn group features and replaces the communication operation. In our method, agent features are divided into two types: in-group features and individual features, which represent the commonalities and differences between agents, respectively. Building on the graph attention network (GAT), we introduce a graph clustering method as a penalty term to optimize the agents' group features, and these features are then used to generate individual Q-values. To overcome the consistency problem introduced by GAT, we add a split loss that distinguishes agent features. Our method is easily converted to the CTDE framework by using the Kullback-Leibler divergence. We evaluate our approach on a challenging set of StarCraft II micromanagement tasks. The results show that our method outperforms existing multi-agent reinforcement learning methods and that its performance improves as the number of agents increases.
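To make the abstract's ingredients concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of the general idea: each agent's embedding is split into an in-group part and an individual part, a GAT-style attention layer mixes in-group features across agents, a clustering-style penalty pulls in-group features toward learned cluster centers, and a split loss pushes individual features of different agents apart before per-agent Q-values are produced. All module names, dimensions, and loss forms here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupFeatureNet(nn.Module):
    """Illustrative sketch: group/individual feature split with a clustering penalty and a split loss."""

    def __init__(self, obs_dim, feat_dim, n_actions, n_groups):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, 2 * feat_dim)      # -> [in-group feat | individual feat]
        self.attn = nn.Linear(2 * feat_dim, 1)               # GAT-style pairwise attention score
        self.cluster_centers = nn.Parameter(torch.randn(n_groups, feat_dim))
        self.q_head = nn.Linear(2 * feat_dim, n_actions)     # per-agent Q-values
        self.feat_dim = feat_dim

    def forward(self, obs):                                  # obs: (n_agents, obs_dim)
        h = self.encoder(obs)
        g, ind = h.split(self.feat_dim, dim=-1)              # in-group vs. individual features

        # Attention aggregation of in-group features over the fully connected agent graph.
        n = g.size(0)
        pair = torch.cat([g.unsqueeze(1).expand(n, n, -1),
                          g.unsqueeze(0).expand(n, n, -1)], dim=-1)
        alpha = F.softmax(F.leaky_relu(self.attn(pair)).squeeze(-1), dim=-1)
        g_agg = alpha @ g                                     # (n_agents, feat_dim)

        q_values = self.q_head(torch.cat([g_agg, ind], dim=-1))
        return q_values, g_agg, ind

    def cluster_loss(self, g_agg):
        # Soft-assignment clustering penalty: pull each agent's group feature
        # toward its (softly) nearest learned cluster center.
        dist = torch.cdist(g_agg, self.cluster_centers)       # (n_agents, n_groups)
        assign = F.softmax(-dist, dim=-1)
        return (assign * dist).sum(dim=-1).mean()

    def split_loss(self, ind):
        # Encourage individual features of different agents to differ,
        # counteracting the over-smoothing ("consistency") effect of attention.
        sim = F.cosine_similarity(ind.unsqueeze(1), ind.unsqueeze(0), dim=-1)
        off_diag = sim - torch.eye(ind.size(0), device=ind.device)
        return off_diag.abs().mean()
```

In this reading, the total objective would combine the standard TD loss on the Q-values with the clustering penalty and the split loss (with assumed weighting coefficients), so that group structure emerges from the learned features rather than from explicit inter-agent communication.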
