Efficient RL via Disentangled Environment and Agent Representations

by Kevin Gmelin et al.

Agents that are aware of the separation between themselves and their environments can leverage this understanding to form effective representations of visual input. We propose an approach for learning such structured representations for RL algorithms, using visual knowledge of the agent, such as its shape or mask, which is often inexpensive to obtain. This knowledge is incorporated into the RL objective via a simple auxiliary loss. We show that our method, Structured Environment-Agent Representations (SEAR), outperforms state-of-the-art model-free approaches across 18 challenging visual simulation environments spanning 5 different robots. Website at https://sear-rl.github.io/
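The abstract describes folding a cheaply obtained agent mask into the RL objective through an auxiliary loss. As a rough illustration (not the paper's actual implementation), one common form of such a term is a binary cross-entropy between a decoder's predicted agent mask and the ground-truth mask, added to the RL loss with a weighting coefficient; the function names and the `aux_weight` parameter below are hypothetical.

```python
import numpy as np

def auxiliary_mask_loss(predicted_mask_logits, agent_mask):
    """Binary cross-entropy between predicted agent-mask logits and the
    (cheaply obtained) ground-truth agent mask, averaged over pixels."""
    p = 1.0 / (1.0 + np.exp(-predicted_mask_logits))  # sigmoid
    p = np.clip(p, 1e-7, 1.0 - 1e-7)                  # avoid log(0)
    return float(-np.mean(agent_mask * np.log(p)
                          + (1.0 - agent_mask) * np.log(1.0 - p)))

def total_loss(rl_loss, predicted_mask_logits, agent_mask, aux_weight=1.0):
    # Hypothetical combined objective: base RL loss plus the
    # weighted auxiliary mask-reconstruction term.
    return rl_loss + aux_weight * auxiliary_mask_loss(
        predicted_mask_logits, agent_mask)
```

In a sketch like this, the auxiliary term pushes the encoder to carry enough information to separate agent pixels from environment pixels, which is the kind of structured representation the abstract argues for.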


Conditional Mutual Information for Disentangled Representations in Reinforcement Learning

Reinforcement Learning (RL) environments can produce training data with ...

Terrain RL Simulator

We provide 89 challenging simulation environments that range in difficul...

Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning

In real-world robotics applications, Reinforcement Learning (RL) agents ...

Shaping Belief States with Generative Environment Models for RL

When agents interact with a complex environment, they must form and main...

Deep Surrogate Assisted Generation of Environments

Recent progress in reinforcement learning (RL) has started producing gen...

Visual Reaction: Learning to Play Catch with Your Drone

In this paper we address the problem of visual reaction: the task of int...

Unsupervised Visual Attention and Invariance for Reinforcement Learning

Vision-based reinforcement learning (RL) is successful, but how to gener...
