Exploiting Fast Decaying and Locality in Multi-Agent MDP with Tree Dependence Structure

09/15/2019
by Guannan Qu, et al.

This paper considers a multi-agent Markov Decision Process (MDP) with n agents, where each agent i is associated with a state s_i and an action a_i taking values in a finite set. Although the global state and action spaces are exponentially large in n, we impose a local dependence structure and focus on local policies that depend only on local states, and we propose a method that finds nearly optimal local policies in time polynomial in n when the dependence structure is a one-directional tree. The algorithm builds on approximate reward functions that are evaluated using a locally truncated Markov process. Further, under certain conditions, we prove that the gap between the approximate reward function and the true reward function decays exponentially fast as the length of the truncated Markov process grows. The intuition is that, under these assumptions, the effect of agent interactions decays exponentially in the distance between agents, a property we term the "fast decaying property".
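The sketch below illustrates the idea of evaluating an agent's reward with a locally truncated Markov process. It is a hypothetical toy construction, not the paper's exact algorithm: agents sit on a directed chain (a special case of a one-directional tree), each agent's transition depends only on its own state, its own action, and its parent's state, and the truncated evaluation simulates only agents within distance k upstream of the target agent while freezing everything outside the truncation. The functions `step`, `local_reward`, and `truncated_value` and all numeric parameters are illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo evaluation of one agent's discounted local
# reward using a k-hop truncated Markov process on a directed chain.
# All dynamics, rewards, and parameters below are illustrative assumptions,
# not the construction from the paper.
import random

n = 6          # agents 0 -> 1 -> ... -> n-1 on a one-directional chain
S = [0, 1]     # binary local state space
A = [0, 1]     # binary local action space


def step(s_parent, a_i, rng):
    """Toy local transition: agent i moves to state 1 with a probability that
    depends on its own action and its parent's current state."""
    p = 0.7 if (a_i == 1 and s_parent == 1) else 0.3
    return 1 if rng.random() < p else 0


def local_reward(s_i):
    """Toy local reward: agent i earns 1 while it is in state 1."""
    return float(s_i)


def truncated_value(i, k, policy, horizon=20, gamma=0.9, episodes=500, seed=0):
    """Estimate agent i's discounted reward by simulating only agents within
    distance k upstream of i; the state just outside the truncation is frozen
    at 0 by convention."""
    rng = random.Random(seed)
    lo = max(0, i - k)  # first agent kept inside the truncated process
    total = 0.0
    for _ in range(episodes):
        s = {j: rng.choice(S) for j in range(lo, i + 1)}
        ret = 0.0
        for t in range(horizon):
            a = {j: policy(s[j]) for j in s}
            ret += (gamma ** t) * local_reward(s[i])
            # The boundary agent lo sees a frozen parent state (0).
            nxt = {}
            for j in range(lo, i + 1):
                s_parent = s[j - 1] if j - 1 >= lo else 0
                nxt[j] = step(s_parent, a[j], rng)
            s = nxt
        total += ret
    return total / episodes


if __name__ == "__main__":
    policy = lambda s_local: 1  # simple local policy: always play action 1
    # As k grows, the influence of the frozen boundary on agent i shrinks,
    # qualitatively matching the fast decaying property.
    for k in range(5):
        print(f"k = {k}: truncated estimate = {truncated_value(4, k, policy):.4f}")
```

Running the loop for increasing k shows the truncated estimates stabilizing quickly, which is the qualitative behavior the exponential-decay bound in the abstract refers to; the paper's formal guarantee concerns the gap between the approximate and true reward functions under its stated assumptions.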
