Dyna-AIL: Adversarial Imitation Learning by Planning

03/08/2019
by Vaibhav Saxena, et al.

Adversarial methods for imitation learning have been shown to perform well on various control tasks. However, they require a large number of environment interactions to converge. In this paper, we propose an end-to-end differentiable adversarial imitation learning algorithm in a Dyna-like framework that switches between model-based planning and model-free learning from expert data. Our results on both discrete and continuous environments show that combining model-based planning with model-free learning converges to an optimal policy with fewer environment interactions than state-of-the-art learning methods.
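
The abstract describes the method only at a high level. The sketch below is a rough, heavily simplified illustration of the Dyna-style alternation it refers to, assuming a GAIL-like setup: a discriminator trained to separate expert transitions from policy transitions supplies the imitation reward, a dynamics model is fit on real transitions, and policy-gradient updates alternate between real environment rollouts (model-free) and imagined rollouts in the learned model (planning). The toy chain environment, network sizes, REINFORCE update, and all function names are illustrative assumptions, not details taken from the paper, whose actual algorithm is end-to-end differentiable and uses a different planning component.

    # Illustrative sketch only: a GAIL-style discriminator reward with Dyna-style
    # alternation between real-environment and learned-model policy updates.
    # Everything here (toy environment, architectures, REINFORCE update) is an
    # assumption for clarity, not the paper's exact algorithm.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_STATES, N_ACTIONS, HORIZON = 8, 2, 10

    def one_hot(idx, n):
        return F.one_hot(torch.as_tensor(idx), n).float()

    def env_step(s, a):  # toy chain: action 1 moves right, action 0 moves left
        s_next = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        return s_next, s_next == N_STATES - 1

    policy = nn.Sequential(nn.Linear(N_STATES, 32), nn.Tanh(), nn.Linear(32, N_ACTIONS))
    disc = nn.Sequential(nn.Linear(N_STATES + N_ACTIONS, 32), nn.Tanh(), nn.Linear(32, 1))
    model = nn.Sequential(nn.Linear(N_STATES + N_ACTIONS, 32), nn.Tanh(), nn.Linear(32, N_STATES))
    opt_pi, opt_d, opt_m = (torch.optim.Adam(net.parameters(), lr=3e-3) for net in (policy, disc, model))

    expert_sa = [(s, 1) for s in range(N_STATES - 1)]  # "expert" demonstrations: always move right

    def disc_logit(s, a):
        return disc(torch.cat([one_hot(s, N_STATES), one_hot(a, N_ACTIONS)]))

    def model_step(s, a):  # imagined transition predicted by the learned dynamics model
        logits = model(torch.cat([one_hot(s, N_STATES), one_hot(a, N_ACTIONS)]))
        return int(logits.argmax()), False

    def rollout(step_fn, start=0):
        """Collect one trajectory of (state, action, log_prob) using step_fn as the dynamics."""
        s, traj = start, []
        for _ in range(HORIZON):
            dist = torch.distributions.Categorical(logits=policy(one_hot(s, N_STATES)))
            a = dist.sample()
            traj.append((s, a.item(), dist.log_prob(a)))
            s, done = step_fn(s, a.item())
            if done:
                break
        return traj

    def policy_update(traj):
        """REINFORCE step with the GAIL-style surrogate reward r(s, a) = -log(1 - D(s, a))."""
        rewards = [F.softplus(disc_logit(s, a)).detach().item() for s, a, _ in traj]
        returns = torch.cumsum(torch.tensor(rewards[::-1]), 0).flip(0)  # reward-to-go
        loss = -(torch.stack([lp for _, _, lp in traj]) * returns).mean()
        opt_pi.zero_grad(); loss.backward(); opt_pi.step()

    for it in range(200):
        real = rollout(env_step)  # model-free phase: interact with the real environment
        # 1) Discriminator: expert transitions labelled 1, policy transitions labelled 0.
        d_loss = (sum(F.binary_cross_entropy_with_logits(disc_logit(s, a), torch.ones(1))
                      for s, a in expert_sa) / len(expert_sa)
                  + sum(F.binary_cross_entropy_with_logits(disc_logit(s, a), torch.zeros(1))
                        for s, a, _ in real) / len(real))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # 2) Dynamics model: supervised prediction of the next state on real transitions.
        for (s, a, _), (s_next, _, _) in zip(real[:-1], real[1:]):
            m_logits = model(torch.cat([one_hot(s, N_STATES), one_hot(a, N_ACTIONS)]))
            m_loss = F.cross_entropy(m_logits.unsqueeze(0), torch.tensor([s_next]))
            opt_m.zero_grad(); m_loss.backward(); opt_m.step()
        # 3) Dyna-style switching: one update from real data, several from imagined rollouts.
        policy_update(real)
        for _ in range(3):
            policy_update(rollout(model_step))

In this sketch, the ratio of imagined to real policy updates (here three to one) is the Dyna-style knob that trades extra rollouts in the learned model for fewer real environment interactions; the paper's reported gains in sample efficiency come from this kind of substitution, though its mechanism for switching between the two modes differs from the fixed schedule assumed above.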
