Multi-Agent Generative Adversarial Imitation Learning

07/26/2018
by Jiaming Song, et al.

Imitation learning algorithms can be used to learn a policy from expert demonstrations without access to a reward signal. However, most existing approaches are not applicable in multi-agent settings due to the existence of multiple (Nash) equilibria and non-stationary environments. We propose a new framework for multi-agent imitation learning for general Markov games, where we build upon a generalized notion of inverse reinforcement learning. We further introduce a practical multi-agent actor-critic algorithm with good empirical performance. Our method can be used to imitate complex behaviors in high-dimensional environments with multiple cooperative or competing agents.
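
To make the adversarial setup concrete, below is a minimal PyTorch-style sketch of one training step with a separate discriminator per agent: each discriminator learns to separate expert state-action pairs from generated ones, and each policy is then pushed toward actions the discriminator scores as expert-like. All names, dimensions, and network sizes here are illustrative assumptions, and the deterministic policy update is a simplification; the paper itself uses a multi-agent actor-critic (policy-gradient) algorithm with the discriminator output as a learned reward rather than backpropagating through the discriminator.

```python
import torch
import torch.nn as nn

# Illustrative dimensions for a hypothetical two-agent Markov game.
N_AGENTS, OBS_DIM, ACT_DIM = 2, 8, 2

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

# One discriminator D_i(s, a_i) and one deterministic policy pi_i(s) per agent.
discriminators = [mlp(OBS_DIM + ACT_DIM, 1) for _ in range(N_AGENTS)]
policies = [mlp(OBS_DIM, ACT_DIM) for _ in range(N_AGENTS)]
d_opts = [torch.optim.Adam(d.parameters(), lr=3e-4) for d in discriminators]
pi_opts = [torch.optim.Adam(p.parameters(), lr=3e-4) for p in policies]
bce = nn.BCEWithLogitsLoss()

def magail_step(expert_batch, policy_batch):
    """One adversarial update. Each argument is a list with one (obs, act)
    tensor pair per agent, shaped [batch, OBS_DIM] and [batch, ACT_DIM]."""
    for i in range(N_AGENTS):
        exp_obs, exp_act = expert_batch[i]
        pol_obs, pol_act = policy_batch[i]

        # Discriminator step: label expert pairs 1 and generated pairs 0.
        d_loss = (
            bce(discriminators[i](torch.cat([exp_obs, exp_act], -1)),
                torch.ones(exp_obs.size(0), 1))
            + bce(discriminators[i](torch.cat([pol_obs, pol_act], -1)),
                  torch.zeros(pol_obs.size(0), 1))
        )
        d_opts[i].zero_grad()
        d_loss.backward()
        d_opts[i].step()

        # Policy step (simplified): maximize the discriminator's score of the
        # policy's own actions. The paper instead performs an actor-critic
        # update using the discriminator output as the reward signal.
        pi_loss = -discriminators[i](
            torch.cat([pol_obs, policies[i](pol_obs)], -1)
        ).mean()
        pi_opts[i].zero_grad()
        pi_loss.backward()
        pi_opts[i].step()

# Smoke test with random tensors standing in for demonstrations and rollouts.
B = 32
rand_batch = lambda: [(torch.randn(B, OBS_DIM), torch.randn(B, ACT_DIM))
                      for _ in range(N_AGENTS)]
magail_step(rand_batch(), rand_batch())
```

A full implementation would add a value-function baseline for the policy update and the regularization choices discussed in the paper; the sketch above only shows the per-agent adversarial structure.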

Related research

07/30/2019 · Multi-Agent Adversarial Inverse Reinforcement Learning
Reinforcement learning agents are prone to undesired behaviors due to re...

09/25/2019 · Independent Generative Adversarial Self-Imitation Learning in Cooperative Multiagent Systems
Many tasks in practice require the collaboration of multiple agents thro...

01/05/2022 · Conditional Imitation Learning for Multi-Agent Games
While advances in multi-agent learning have enabled the training of incr...

03/25/2023 · Embedding Contextual Information through Reward Shaping in Multi-Agent Learning: A Case Study from Google Football
Artificial Intelligence has been used to help human complete difficult t...

06/20/2018 · Learning Neural Parsers with Deterministic Differentiable Imitation Learning
We address the problem of spatial segmentation of a 2D object in the con...

07/10/2021 · Multi-Agent Imitation Learning with Copulas
Multi-agent imitation learning aims to train multiple agents to perform ...

11/12/2019 · Accelerating Training in Pommerman with Imitation and Reinforcement Learning
The Pommerman simulation was recently developed to mimic the classic Jap...
