Diversifying Message Aggregation in Multi-Agent Communication via Normalized Tensor Nuclear Norm Regularization

by Yuanzhao Zhai, et al.

Aggregating messages is a key component of communication in multi-agent reinforcement learning (Comm-MARL). Recently, graph attention networks (GAT) have become prevalent in Comm-MARL, where agents are represented as nodes and messages are aggregated via weighted message passing. While successful, GAT can lead to homogeneous message-aggregation strategies, and a "core" agent may excessively influence other agents' behaviors, which can severely limit multi-agent coordination. To address this challenge, we first study the adjacency tensor of the communication graph and show that the homogeneity of message aggregation can be measured by the normalized tensor rank. Since rank optimization is known to be NP-hard, we define a new nuclear norm, a convex surrogate of the normalized tensor rank, to replace it. Leveraging this norm, we propose a plug-and-play regularizer on the adjacency tensor, named Normalized Tensor Nuclear Norm Regularization (NTNNR), which actively enriches the diversity of message aggregation during training. We extensively evaluate GAT with the proposed regularizer in both cooperative and mixed cooperative-competitive scenarios. The results demonstrate that aggregating messages with NTNNR-enhanced GAT improves training efficiency and achieves higher asymptotic performance than existing message aggregation methods. When NTNNR is applied to existing graph-attention Comm-MARL methods, we also observe significant performance improvements on StarCraft II micromanagement benchmarks.
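To make the core idea concrete, below is a minimal, hypothetical sketch of a nuclear-norm-based diversity regularizer on a stack of attention matrices. The paper's exact normalized tensor nuclear norm is not specified in this abstract; this sketch only illustrates the general mechanism of rewarding a larger nuclear norm (a convex surrogate for rank) so that the stacked attention maps stay diverse rather than collapsing to a low-rank, homogeneous pattern. The function name and the plain sum-of-singular-values definition are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def nuclear_norm_regularizer(attention_tensor):
    """Hypothetical sketch: sum of matrix nuclear norms over stacked
    attention maps. attention_tensor has shape (k, n, n): k attention
    maps (e.g., heads or timesteps) over n agents. A larger value
    indicates higher-rank, more diverse aggregation patterns."""
    total = 0.0
    for A in attention_tensor:
        # The nuclear norm of a matrix is the sum of its singular values.
        total += np.linalg.svd(A, compute_uv=False).sum()
    return total
```

In training, such a term would typically be subtracted from the task loss with a small coefficient (e.g., `loss = task_loss - lam * nuclear_norm_regularizer(att)`), so that gradient descent simultaneously minimizes the task loss and enlarges the surrogate rank of the adjacency tensor, discouraging all agents from attending to the same "core" agent.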


