A Perspective on Objects and Systematic Generalization in Model-Based RL

06/03/2019
by Sjoerd van Steenkiste, et al.

In order to meet the diverse challenges in solving many real-world problems, an intelligent agent has to be able to dynamically construct a model of its environment. Objects facilitate the modular reuse of prior knowledge and the combinatorial construction of such models. In this work, we argue that dynamically bound features (objects) do not simply emerge in connectionist models of the world. We identify several requirements that need to be fulfilled in overcoming this limitation and highlight corresponding inductive biases.
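To make the idea of dynamically bound features more concrete, below is a minimal sketch of an object-factored (slot-based) transition model. It is an illustrative assumption, not the architecture proposed in the paper: the slot dimensions, module names, and simple two-layer networks are placeholders. The point it illustrates is the one the abstract makes, that a single shared module can be reused across objects and object pairs, giving modular reuse of knowledge and combinatorial construction of the world model.

```python
# Hypothetical sketch of an object-factored (slot-based) transition model.
# Each object is a slot vector; one shared module is reused for every slot
# and every pair of slots. All names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_SLOTS, SLOT_DIM, HIDDEN = 4, 8, 32

def mlp_params(in_dim, out_dim):
    # Random weights for a two-layer MLP (stand-in for learned parameters).
    return (rng.normal(0, 0.1, (in_dim, HIDDEN)),
            rng.normal(0, 0.1, (HIDDEN, out_dim)))

def mlp(x, params):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

# Shared parameters: the same modules are reused for every slot and pair.
pair_params = mlp_params(2 * SLOT_DIM, SLOT_DIM)   # object-object effects
self_params = mlp_params(2 * SLOT_DIM, SLOT_DIM)   # per-object state update

def transition(slots):
    """Predict next-step slots from current slots, shape [NUM_SLOTS, SLOT_DIM]."""
    next_slots = np.zeros_like(slots)
    for i in range(NUM_SLOTS):
        # Aggregate the effect of every other object on object i.
        effects = [mlp(np.concatenate([slots[i], slots[j]]), pair_params)
                   for j in range(NUM_SLOTS) if j != i]
        interaction = np.sum(effects, axis=0)
        # Update slot i from its own state and the aggregated interaction.
        next_slots[i] = slots[i] + mlp(np.concatenate([slots[i], interaction]),
                                       self_params)
    return next_slots

slots_t = rng.normal(size=(NUM_SLOTS, SLOT_DIM))
slots_t1 = transition(slots_t)
print(slots_t1.shape)  # (4, 8)
```

Whether such slots actually come to represent distinct objects depends on how the representation is learned from raw observations; that binding problem is precisely what the paper argues does not resolve itself in connectionist world models without suitable inductive biases.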
