Dynamic probabilistic logic models for effective abstractions in RL

10/15/2021
by Harsha Kokel et al.

State abstraction enables sample-efficient learning and better task transfer in complex reinforcement learning environments. Recently, we proposed RePReL (Kokel et al. 2021), a hierarchical framework that leverages a relational planner to provide useful state abstractions for learning. We present a brief overview of this framework and of how a dynamic probabilistic logic model is used to design these state abstractions. Our experiments show that RePReL not only achieves better performance and more efficient learning on the task at hand but also generalizes better to unseen tasks.
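The core idea, a planner decomposing a task into subtasks and each subtask inducing its own state abstraction for the RL learner, can be sketched in a few lines. This is only an illustrative toy: the functions `plan` and `abstract` and the `TabularLearner` class are hypothetical stand-ins, not the RePReL implementation, which uses a relational planner and dynamic probabilistic logic models to derive the abstractions.

```python
from collections import defaultdict

def plan(task):
    # Toy stand-in for the high-level relational planner: decompose a
    # task into an ordered list of subtasks.
    return ["pickup", "dropoff"] if task == "deliver" else [task]

def abstract(state, subtask):
    # Subtask-specific abstraction: project the full state onto only the
    # variables relevant to the current subtask, dropping the rest.
    relevant = {
        "pickup": ("agent_pos", "passenger_pos"),
        "dropoff": ("agent_pos", "destination"),
    }
    return tuple(sorted((k, state[k]) for k in relevant[subtask]))

class TabularLearner:
    # A Q-table keyed by abstract states; because irrelevant variables
    # are abstracted away, experience learned for one task can transfer
    # to any task that shares the same subtask.
    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        td_error = r + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td_error
```

In this sketch, a "pickup" policy trained on one delivery task sees only `agent_pos` and `passenger_pos`, so its Q-values remain valid when the destination changes, which is the kind of transfer the abstract describes.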


Related research

02/08/2020 · Learning State Abstractions for Transfer in Continuous Control
Can simple algorithms with a good representation solve challenging reinf...

10/04/2022 · Learning Dynamic Abstract Representations for Sample-Efficient Reinforcement Learning
In many real-world problems, the learning agent needs to learn a problem...

01/15/2017 · Near Optimal Behavior via Approximate State Abstraction
The combinatorial explosion that plagues planning and reinforcement lear...

10/04/2018 · Abstracting Probabilistic Relational Models
Abstraction is a powerful idea widely used in science, to model, reason ...

10/13/2022 · A Direct Approximation of AIXI Using Logical State Abstractions
We propose a practical integration of logical state abstraction with AIX...

02/19/2021 · Model-Invariant State Abstractions for Model-Based Reinforcement Learning
Accuracy and generalization of dynamics models is key to the success of ...
