Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise

05/19/2022
by Yoshinari Motokawa, et al.

In multi-agent systems, noise reduction techniques are important for improving overall system reliability, because agents must rely on limited environmental information to develop cooperative and coordinated behaviors with the surrounding agents. However, previous studies have often applied centralized noise reduction methods to build robust and versatile coordination in noisy multi-agent environments, even though distributed and decentralized autonomous agents are more plausible for real-world applications. In this paper, we introduce a distributed attentional actor architecture model for a multi-agent system (DA3-X) and demonstrate that agents with DA3-X can learn selectively from the noisy environment and behave cooperatively. We experimentally evaluate the effectiveness of DA3-X by comparing learning methods with and without DA3-X, and show that agents with DA3-X achieve better performance than baseline agents. Furthermore, we visualize heatmaps of attentional weights from DA3-X to analyze how the decision-making process and coordinated behavior are influenced by noise.
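To make the idea of a decentralized attentional actor concrete, the sketch below shows a minimal, hypothetical policy network (not the authors' implementation) in which a single agent attends over the entities in its own limited observation and exposes the attention weights so they could be inspected as heatmaps. The class name, dimensions, and pooling choice are illustrative assumptions only.

```python
# Hypothetical sketch (not the authors' DA3-X code): a decentralized actor
# that attends over entities in its local observation and returns the
# attention weights for heatmap-style inspection.
import torch
import torch.nn as nn


class AttentionalActor(nn.Module):
    """One agent's policy network; no centralized critic or shared global state."""

    def __init__(self, obs_dim: int, embed_dim: int, n_actions: int, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)           # per-entity encoder
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.policy_head = nn.Linear(embed_dim, n_actions)   # action logits

    def forward(self, local_obs: torch.Tensor):
        # local_obs: (batch, n_entities, obs_dim) -- the agent's limited,
        # possibly noisy view of nearby entities.
        tokens = self.embed(local_obs)
        # Self-attention lets the agent down-weight noisy or irrelevant entities.
        attended, attn_weights = self.attn(tokens, tokens, tokens)
        # Pool over entities and map to a distribution over discrete actions.
        logits = self.policy_head(attended.mean(dim=1))
        return torch.distributions.Categorical(logits=logits), attn_weights


if __name__ == "__main__":
    actor = AttentionalActor(obs_dim=8, embed_dim=32, n_actions=5)
    obs = torch.randn(1, 6, 8)             # one agent observing 6 entities
    dist, weights = actor(obs)
    action = dist.sample()
    print(action.item(), weights.shape)    # weights can be plotted as a heatmap
```

In this sketch, each agent runs its own copy of the network on its own observation, which is what makes the approach distributed rather than centralized; the returned attention weights are what one would visualize to see how noise shifts the agent's focus.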
