Learning To Reach Goals Without Reinforcement Learning

by Dibya Ghosh, et al.

Imitation learning algorithms provide a straightforward approach for training control policies via supervised learning. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring expert demonstrations. In this paper, we ask: can we take insights from imitation learning to design algorithms that effectively acquire optimal policies from scratch, without any expert demonstrations? The key observation that makes this possible is that, in the multi-task setting, trajectories generated by a suboptimal policy can still serve as optimal examples for other tasks. In particular, when tasks correspond to different goals, every trajectory is a successful demonstration for the goal state that it actually reaches. We propose a simple algorithm for learning goal-reaching behaviors without any demonstrations, complicated user-provided reward functions, or complex reinforcement learning methods. Our method simply maximizes the likelihood of the actions the agent took in its own previous rollouts, conditioned on the goal being the state that it actually reached. Although related variants of this approach have been proposed previously in imitation learning with demonstrations, we show how this approach can effectively learn goal-reaching policies from scratch. We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that our method performs competitively with more complex reinforcement learning methods on a range of challenging goal-reaching problems, while yielding advantages in stability and use of offline data.
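The core idea described above can be sketched in a few lines: roll out the current goal-conditioned policy, relabel each trajectory so that states the agent actually visited later become the "goals" for earlier actions, and then fit the policy to those relabeled (state, goal, action) tuples by maximum likelihood. The sketch below is a toy tabular illustration of this loop, not the paper's neural-network implementation; the environment (a 5-state chain), the class names, and the count-based policy are all illustrative assumptions.

```python
import random
from collections import defaultdict

def relabel(states, actions):
    """Hindsight relabeling: each action the agent took is treated as an
    expert action for reaching every state visited later in the same
    trajectory. Returns (state, goal, action) supervised examples."""
    return [(states[t], states[h], actions[t])
            for t in range(len(actions))
            for h in range(t + 1, len(states))]

class TabularPolicy:
    """Goal-conditioned policy stored as action counts per (state, goal);
    for a tabular policy, maximizing action likelihood reduces to counting."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.counts = defaultdict(lambda: [0] * n_actions)

    def act(self, state, goal, rng):
        c = self.counts[(state, goal)]
        if sum(c) == 0:                      # unseen (state, goal): explore
            return rng.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: c[a])

    def update(self, examples):
        for s, g, a in examples:
            self.counts[(s, g)][a] += 1

def rollout(policy, start, goal, horizon, rng):
    """Toy 5-state chain: action 0 moves left, action 1 moves right."""
    states, actions = [start], []
    for _ in range(horizon):
        a = policy.act(states[-1], goal, rng)
        actions.append(a)
        states.append(max(0, min(4, states[-1] + (1 if a else -1))))
    return states, actions

# Self-supervised loop: no rewards, no demonstrations -- the agent's own
# rollouts, relabeled in hindsight, are the only training data.
rng = random.Random(0)
policy = TabularPolicy(n_actions=2)
for _ in range(200):
    s, a = rollout(policy, rng.randrange(5), rng.randrange(5), 8, rng)
    policy.update(relabel(s, a))
```

Note that the relabeling step is what makes this self-supervised: even a random rollout that fails its commanded goal is a valid demonstration for the states it did visit.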


Related Research

Reward-Conditioned Policies

Reinforcement learning offers the promise of automating the acquisition ...

Goal-conditioned Imitation Learning

Designing rewards for Reinforcement Learning (RL) is challenging because...

Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization

Hindsight goal relabeling has become a foundational technique for multi-...

Universal Value Density Estimation for Imitation Learning and Goal-Conditioned Reinforcement Learning

This work considers two distinct settings: imitation learning and goal-c...

Learning the optimal state-feedback via supervised imitation learning

Imitation learning is a control design paradigm that seeks to learn a co...

State Representation Learning from Demonstration

In a context where several policies can be observed as black boxes on di...

Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration

Deep Reinforcement Learning has been successfully applied to learn robot...
