The Laplacian in RL: Learning Representations with Efficient Approximations

10/10/2018
by   Yifan Wu, et al.

The eigenvectors associated with the smallest eigenvalues of the graph Laplacian are well known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited to general RL settings for two main reasons: first, they are computationally expensive, often requiring operations on large matrices; second, they lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, it approximates the eigenvectors more accurately than previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned with our method in goal-achieving RL tasks, providing evidence that our technique can significantly improve the performance of an RL agent.
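To make the object being approximated concrete, here is a minimal sketch of the classical tabular computation that the paper's scalable method is designed to replace: build the graph Laplacian of a small state-transition graph and take the eigenvectors with the smallest eigenvalues as the state representation. The chain graph and the dimension `d` are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative tabular example (not the paper's scalable method):
# a 5-state chain, where a random-walk behavior policy connects
# adjacent states with equal weight.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0  # symmetric edge weights

D = np.diag(W.sum(axis=1))  # degree matrix
L = D - W                   # unnormalized graph Laplacian

# eigh returns eigenvalues in ascending order; the eigenvectors for the
# d smallest eigenvalues form a d-dimensional Laplacian state representation.
eigvals, eigvecs = np.linalg.eigh(L)
d = 2
representation = eigvecs[:, :d]  # one d-dimensional embedding per state

# For a connected graph, the smallest eigenvalue is 0 (constant eigenvector);
# the second eigenvector (the Fiedler vector) already orders the chain.
assert np.isclose(eigvals[0], 0.0)
```

This exact eigendecomposition costs O(n^3) in the number of states, which is why methods that work directly on large (or continuous) state spaces, like the one proposed here, avoid forming the matrix at all.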


Related research

- 01/26/2023: Deep Laplacian-based Options for Temporally-Extended Exploration
- 07/12/2021: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing
- 05/22/2023: Policy Representation via Diffusion Probability Model for Reinforcement Learning
- 09/28/2021: Exploratory State Representation Learning
- 05/31/2022: Graph Backup: Data Efficient Backup Exploiting Markovian Transitions
- 10/24/2022: Reachability-Aware Laplacian Representation in Reinforcement Learning
- 12/27/2021: A Graph Attention Learning Approach to Antenna Tilt Optimization
