Planning with Abstract Learned Models While Learning Transferable Subtasks

12/16/2019
by John Winder, et al.

We introduce an algorithm for model-based hierarchical reinforcement learning that acquires self-contained transition and reward models suitable for probabilistic planning at multiple levels of abstraction. We call this framework Planning with Abstract Learned Models (PALM). By representing subtasks symbolically using a new formal structure, the lifted abstract Markov decision process (L-AMDP), PALM learns models that are independent and modular. Through our experiments, we show how PALM integrates planning and execution, facilitating rapid and efficient learning of abstract, hierarchical models. We also demonstrate that these learned models can be transferred to new, related tasks.
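To make the framework's core ingredients concrete, the following is a minimal, hypothetical sketch in Python: each subtask owns its own tabular transition and reward model over its abstract state space, and plans over that learned model with value iteration. All names here (Subtask, update, model, plan) and the maximum-likelihood tabular models are illustrative assumptions for exposition, not the paper's actual implementation or API.

```python
# Illustrative sketch only: class and method names are hypothetical,
# not taken from the PALM paper or its code.
from collections import defaultdict

class Subtask:
    """One node in a task hierarchy. It keeps its own tabular
    transition and reward models over its abstract state space,
    so the learned model is self-contained and reusable."""

    def __init__(self, name, actions, gamma=0.95):
        self.name = name
        self.actions = actions  # child subtasks or primitive actions
        self.gamma = gamma
        self.counts = defaultdict(lambda: defaultdict(int))  # (s,a) -> {s': n}
        self.reward_sum = defaultdict(float)                 # (s,a) -> total r
        self.visits = defaultdict(int)                       # (s,a) -> n

    def update(self, s, a, r, s2):
        """Record one abstract transition; each subtask learns its own model."""
        self.counts[(s, a)][s2] += 1
        self.reward_sum[(s, a)] += r
        self.visits[(s, a)] += 1

    def model(self, s, a):
        """Maximum-likelihood transition distribution and mean reward."""
        n = self.visits[(s, a)]
        if n == 0:
            return {}, 0.0
        probs = {s2: c / n for s2, c in self.counts[(s, a)].items()}
        return probs, self.reward_sum[(s, a)] / n

    def plan(self, states, n_iters=50):
        """Value iteration over the learned abstract model."""
        V = {s: 0.0 for s in states}
        for _ in range(n_iters):
            for s in states:
                q = []
                for a in self.actions:
                    probs, r = self.model(s, a)
                    q.append(r + self.gamma * sum(p * V.get(s2, 0.0)
                                                  for s2, p in probs.items()))
                V[s] = max(q) if q else 0.0
        return V
```

Because each Subtask keeps its model local to its own abstract state and action spaces, a learned subtask model can in principle be reused in a different hierarchy unchanged, which is the property the abstract describes as modularity and transfer.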

