Adaptable and Verifiable BDI Reasoning

by Peter Stringer, et al.

Long-term autonomy requires autonomous systems to adapt when their capabilities no longer perform as expected. To achieve this, a system must first be able to detect such changes. In this position paper, we describe a system architecture for BDI autonomous agents capable of adapting to changes in a dynamic environment, and we outline the research required to realise it. Specifically, we describe an agent-maintained self-model, together with accompanying theories of durative actions and of learning new action descriptions in BDI systems.
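As an illustration of the kind of detection the abstract describes, the sketch below shows one plausible (hypothetical) form an agent-maintained self-model could take: it tracks, per action, how often the action achieves its expected outcome, and flags the action as degraded when its recent success rate falls below a threshold. The class and method names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a self-model for capability monitoring.
# An agent records whether each durative action achieved its expected
# post-condition; a falling success rate signals capability degradation.

class SelfModel:
    def __init__(self, threshold=0.8, window=10):
        self.threshold = threshold  # minimum acceptable success rate
        self.window = window        # number of recent outcomes to keep
        self.history = {}           # action name -> list of bool outcomes

    def record(self, action, succeeded):
        """Record whether an action achieved its expected outcome."""
        outcomes = self.history.setdefault(action, [])
        outcomes.append(succeeded)
        if len(outcomes) > self.window:
            outcomes.pop(0)  # keep only the most recent window

    def degraded(self, action):
        """True if the action's recent success rate is below threshold."""
        outcomes = self.history.get(action, [])
        if not outcomes:
            return False
        return sum(outcomes) / len(outcomes) < self.threshold


model = SelfModel(threshold=0.8, window=5)
for ok in [True, True, False, False, False]:
    model.record("move_to", ok)
print(model.degraded("move_to"))  # 2/5 success rate -> degradation detected
```

Detecting degradation in this way is only the first step the paper identifies; the agent would then need to revise or relearn the action description itself.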

