APS: Active Pretraining with Successor Features

08/31/2021
by Hao Liu, et al.

We introduce a new unsupervised pretraining objective for reinforcement learning. During the unsupervised reward-free pretraining phase, the agent maximizes the mutual information between tasks and the states induced by its policy. Our key contribution is a novel lower bound on this intractable quantity. We show that by reinterpreting and combining variational successor features <cit.> with nonparametric entropy maximization <cit.>, the intractable mutual information can be efficiently optimized. The proposed method, Active Pretraining with Successor Features (APS), explores the environment via nonparametric entropy maximization, and the explored data can be efficiently leveraged to learn behavior via variational successor features. APS addresses the limitations of existing mutual-information-maximization-based and entropy-maximization-based unsupervised RL methods, combining the best of both worlds. When evaluated on the Atari 100k data-efficiency benchmark, our approach significantly outperforms previous methods that combine unsupervised pretraining with task-specific finetuning.
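To make the two-term objective concrete, the sketch below shows one common way such a decomposition is implemented: a particle-based (k-nearest-neighbor) entropy bonus for exploration, plus an inner product between a state embedding and a task vector for the successor-features term. This is a minimal NumPy illustration under our own assumptions, not the authors' reference implementation; the function names, the choice of `k`, and the unit-normalization of embeddings are all illustrative.

```python
import numpy as np

def knn_entropy_reward(phi, k=12):
    """Exploration term: nonparametric entropy bonus based on the mean
    distance from each state embedding to its k nearest neighbors
    in the batch (larger distance -> less-visited region -> more reward)."""
    # Pairwise Euclidean distances between embeddings, shape (N, N).
    d = np.linalg.norm(phi[:, None, :] - phi[None, :, :], axis=-1)
    # Sort each row; skip column 0 (distance to self), keep k neighbors.
    knn = np.sort(d, axis=1)[:, 1:k + 1]
    return np.log(1.0 + knn.mean(axis=1))

def successor_feature_reward(phi, z):
    """Exploitation term: inner product between the (unit-normalized)
    state embedding phi(s) and the sampled task vector z."""
    phi_n = phi / (np.linalg.norm(phi, axis=-1, keepdims=True) + 1e-8)
    z_n = z / (np.linalg.norm(z) + 1e-8)
    return phi_n @ z_n

def aps_style_intrinsic_reward(phi, z, k=12):
    # Total intrinsic reward = entropy (exploration) + SF (exploitation).
    return knn_entropy_reward(phi, k) + successor_feature_reward(phi, z)
```

In a full agent, `phi` would come from a learned encoder trained with the variational successor-features objective, and `z` would be resampled per episode; here both are just arrays, to show how the two reward terms combine additively.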

