Meta-Reinforcement Learning of Structured Exploration Strategies

by Abhishek Gupta, et al.

Exploration is a fundamental challenge in reinforcement learning (RL). Many current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can inform how exploration should be performed in new ones. In this work, we introduce a novel gradient-based fast adaptation algorithm -- model agnostic exploration with structured noise (MAESN) -- that learns exploration strategies from prior experience. The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into the policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN learns exploration strategies more effectively than prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation.
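The core idea of injecting structured stochasticity through a latent space can be illustrated with a minimal sketch. The snippet below is a hypothetical toy model, not the paper's implementation: a linear policy conditioned on a latent variable z that is sampled once per episode from learned variational parameters (mu, log_sigma), so the resulting exploration noise is temporally coherent within an episode, unlike i.i.d. per-step action noise. All names, dimensions, and the toy dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear policy: action = W @ [state; z] + b.
state_dim, action_dim, latent_dim = 4, 2, 3
W = rng.standard_normal((action_dim, state_dim + latent_dim)) * 0.1
b = np.zeros(action_dim)

# Placeholder learned parameters of the latent exploration space
# (in MAESN these are meta-trained per task alongside the policy).
mu, log_sigma = np.zeros(latent_dim), np.zeros(latent_dim)

def run_episode(policy_W, policy_b, horizon=5):
    # Sample ONE latent for the whole episode: this is the
    # structured noise, held fixed across time steps.
    z = mu + np.exp(log_sigma) * rng.standard_normal(latent_dim)
    actions = []
    s = np.zeros(state_dim)
    for _ in range(horizon):
        a = policy_W @ np.concatenate([s, z]) + policy_b
        actions.append(a)
        # Toy dynamics purely for illustration.
        s = s + 0.1 * np.pad(a, (0, state_dim - action_dim))
    return np.array(actions)

acts = run_episode(W, b)
```

Because z is fixed within an episode, the actions it produces are correlated across time; resampling z between episodes yields diverse, coherent behaviors, which is the qualitative effect the latent exploration space is meant to achieve.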

Learn to Effectively Explore in Context-Based Meta-RL

Meta reinforcement learning (meta-RL) provides a principled approach for...

DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated and Musculoskeletal Systems

Muscle-actuated organisms are capable of learning an unparalleled divers...

Non-local Policy Optimization via Diversity-regularized Collaborative Exploration

Conventional Reinforcement Learning (RL) algorithms usually have one sin...

MAME: Model-Agnostic Meta-Exploration

Meta-Reinforcement learning approaches aim to develop learning procedure...

Learning to Explore in Motion and Interaction Tasks

Model free reinforcement learning suffers from the high sampling complex...

Efficient Exploration via State Marginal Matching

To solve tasks with sparse rewards, reinforcement learning algorithms mu...

Hashing Over Predicted Future Frames for Informed Exploration of Deep Reinforcement Learning

In reinforcement learning (RL) tasks, an efficient exploration mechanism...
