A Primer on Maximum Causal Entropy Inverse Reinforcement Learning

03/22/2022
by Adam Gleave, et al.

Inverse Reinforcement Learning (IRL) algorithms infer a reward function that explains demonstrations provided by an expert acting in the environment. Maximum Causal Entropy (MCE) IRL is currently the most popular formulation of IRL, with numerous extensions. In this tutorial, we present a compressed derivation of MCE IRL and the key results from contemporary implementations of MCE IRL algorithms. We hope this will serve both as an introductory resource for those new to the field, and as a concise reference for those already familiar with these topics.
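The primer itself develops the derivation in full; as a quick taste of what MCE IRL computes, below is a minimal tabular sketch of the standard formulation (soft value iteration to obtain the entropy-regularized policy, then gradient ascent that matches the expert's feature expectations). It assumes one-hot state-action features and a finite-horizon occupancy approximation; the function names and parameters are our own illustration, not code from the paper or any accompanying library.

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(R, T, gamma=0.95, n_iters=200):
    """Soft (maximum causal entropy) value iteration for a tabular MDP.

    R: (S, A) reward matrix; T: (S, A, S') transition probabilities.
    Returns the soft-optimal stochastic policy pi, shape (S, A).
    """
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        Q = R + gamma * (T @ V)            # soft Bellman backup, shape (S, A)
        V = logsumexp(Q, axis=1)           # V(s) = log sum_a exp Q(s, a)
    return np.exp(Q - V[:, None])          # pi(a|s) = exp(Q(s, a) - V(s))

def occupancy(pi, T, p0, horizon=50):
    """Expected state-action visitation counts of pi over a finite horizon."""
    rho = np.zeros_like(pi)
    d = p0.copy()                          # state distribution at time t
    for _ in range(horizon):
        sa = d[:, None] * pi               # P(s, a) at this timestep
        rho += sa
        d = np.einsum('sa,saz->z', sa, T)  # push forward through the dynamics
    return rho

def mce_irl(T, p0, expert_rho, lr=0.5, n_steps=100, gamma=0.95):
    """Gradient ascent on the MCE IRL demonstration likelihood.

    With one reward parameter per (s, a) pair (one-hot features), the
    likelihood gradient is just: expert visitations minus learner visitations.
    """
    S, A, _ = T.shape
    R = np.zeros((S, A))                   # tabular reward estimate
    for _ in range(n_steps):
        pi = soft_value_iteration(R, T, gamma=gamma)
        rho = occupancy(pi, T, p0)
        R += lr * (expert_rho - rho)       # feature-expectation matching step
    return R
```

The log-sum-exp backup in place of a hard max is what distinguishes soft value iteration from the standard Bellman update: it makes the induced policy stochastic and the demonstration likelihood differentiable in the reward, which is exactly the property the MCE derivation exploits.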


Related research

02/04/2021 · Hybrid Adversarial Inverse Reinforcement Learning
In this paper, we investigate the problem of the inverse reinforcement l...

12/01/2020 · Revisiting Maximum Entropy Inverse Reinforcement Learning: New Perspectives and Algorithms
We provide new perspectives and inference algorithms for Maximum Entropy...

05/22/2018 · Multi-task Maximum Entropy Inverse Reinforcement Learning
Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferr...

07/26/2019 · Learning Task Specifications from Demonstrations via the Principle of Maximum Causal Entropy
In many settings (e.g., robotics) demonstrations provide a natural way t...

11/16/2019 · Generalized Maximum Causal Entropy for Inverse Reinforcement Learning
We consider the problem of learning from demonstrated trajectories with ...

09/22/2017 · Inverse Reinforcement Learning with Conditional Choice Probabilities
We make an important connection to existing results in econometrics to d...

03/04/2021 · Inverse Reinforcement Learning with Explicit Policy Estimates
Various methods for solving the inverse reinforcement learning (IRL) pro...
