A Framework for Sequential Planning in Multi-Agent Settings

09/09/2011
by P. Doshi et al.

This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian updates to maintain their beliefs over time. The solutions map belief states to actions. Models of other agents may include their belief states and are related to agent types considered in games of incomplete information. We express the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and piece-wise linearity and convexity of the value functions, carry over to our framework. Our approach complements a more traditional approach to interactive settings which uses Nash equilibria as a solution paradigm. We seek to avoid some of the drawbacks of equilibria, which may be non-unique and do not capture off-equilibrium behaviors. We do so at the cost of having to represent, process, and continuously revise models of other agents. Since the agents' beliefs may be arbitrarily nested, the optimal solutions to decision-making problems are only asymptotically computable. However, approximate belief updates and approximately optimal plans are computable. We illustrate our framework using a simple application domain, and we show examples of belief updates and value functions.
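
To make the idea of a belief over both physical states and models of another agent concrete, the sketch below shows a Bayesian update of one agent's belief over pairs (state, model of agent j). It is a minimal illustration only: the toy two-state domain, the candidate models of j, and all function and variable names are assumptions made for this example and are not the paper's notation or code.

# Illustrative sketch of a level-1 interactive belief update, in the spirit of
# the framework described above. The domain, the candidate models of agent j,
# and all names are assumptions made for this example.
from collections import defaultdict

# --- Toy two-agent domain (hypothetical) ---------------------------------
STATES = ["left", "right"]
OBS_I = ["hear-left", "hear-right"]

def transition(s, a_i, a_j):
    """P(s' | s, a_i, a_j): the state persists unless either agent opens, which resets it uniformly."""
    if a_i == "open" or a_j == "open":
        return {s2: 0.5 for s2 in STATES}
    return {s: 1.0}

def obs_i(s2, a_i, a_j):
    """P(o_i | s', a_i, a_j): noisy observation of the state for agent i."""
    correct = "hear-left" if s2 == "left" else "hear-right"
    wrong = "hear-right" if s2 == "left" else "hear-left"
    return {correct: 0.85, wrong: 0.15}

def obs_j(s2, a_j, a_i):
    """P(o_j | s', a_j, a_i): agent j's observation model (same form, for this example)."""
    return obs_i(s2, a_j, a_i)

# Candidate models of agent j. Here each model is a label with a fixed action
# distribution and a trivial model-update rule; richer intentional models
# would carry j's own nested belief.
MODELS_J = ["j-cautious", "j-bold"]

def policy_j(m_j):
    """P(a_j | m_j): the action distribution each candidate model prescribes."""
    return {"j-cautious": {"listen": 0.9, "open": 0.1},
            "j-bold": {"listen": 0.4, "open": 0.6}}[m_j]

def model_update_j(m_j, a_j, o_j):
    """How i thinks j's model evolves given j's action and observation (kept static here)."""
    return m_j

def belief_update(b, a_i, o_i):
    """Bayesian update of i's belief over interactive states (s, m_j) after i acts and observes."""
    new_b = defaultdict(float)
    for (s, m_j), p in b.items():
        if p == 0.0:
            continue
        for a_j, p_aj in policy_j(m_j).items():
            for s2, p_s2 in transition(s, a_i, a_j).items():
                p_oi = obs_i(s2, a_i, a_j).get(o_i, 0.0)
                if p_oi == 0.0:
                    continue
                # Sum over j's possible observations to propagate j's model.
                for o_j, p_oj in obs_j(s2, a_j, a_i).items():
                    m_j2 = model_update_j(m_j, a_j, o_j)
                    new_b[(s2, m_j2)] += p * p_aj * p_s2 * p_oi * p_oj
    total = sum(new_b.values())
    return {k: v / total for k, v in new_b.items()} if total > 0 else dict(new_b)

if __name__ == "__main__":
    # Uniform prior over interactive states, then one update after i listens
    # and hears "hear-left".
    prior = {(s, m): 1.0 / (len(STATES) * len(MODELS_J)) for s in STATES for m in MODELS_J}
    posterior = belief_update(prior, "listen", "hear-left")
    for interactive_state, prob in sorted(posterior.items()):
        print(interactive_state, round(prob, 3))

In the full framework, the model-update step would itself involve reasoning about j's decision problem at the next lower level of nesting, which is where the arbitrarily nested beliefs mentioned in the abstract arise.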

