Task-Oriented Communication Design at Scale

by Arsham Mostaani et al.

With countless promising applications in domains such as the IoT and Industry 4.0, task-oriented communication design (TOCD) is attracting growing attention from the research community. This paper presents a novel approach for designing scalable task-oriented quantization and communication in cooperative multi-agent systems (MAS). The proposed approach uses the TOCD framework and the value of information (VoI) concept to enable efficient communication of quantized observations among agents while maximizing the MAS's average return, a metric that quantifies the MAS's task effectiveness. The computational complexity of learning the VoI, however, grows exponentially with the number of agents. We therefore propose a three-step framework: (i) learn the VoI for a two-agent system using reinforcement learning (RL); (ii) design the quantization policy for an N-agent MAS, using the learned VoI, over a range of bit budgets; and (iii) learn the agents' control policies with RL while they follow the quantization policies designed in the previous step. In other words, the computational cost of obtaining the VoI is reduced by exploiting insights gained from studying a similar two-agent system rather than the original N-agent system. Agents' observations are then quantized so that their more valuable observations are communicated more precisely. Our analytical results establish the applicability of the proposed framework to a wide range of problems. Numerical results show striking reductions in the computational complexity of obtaining the VoI needed for TOCD in a MAS, without compromising the MAS's average return.
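The key idea in step (ii) can be illustrated with a minimal sketch: given per-region VoI scores, spend a fixed bit budget so that more valuable observation regions receive finer quantization. The function names, the greedy allocation rule, and the marginal-gain model (each extra bit halving the uniform-quantizer step) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def allocate_bits(voi, bit_budget):
    """Greedily assign a bit budget across observation regions, one bit
    at a time, to the region with the largest marginal gain.
    Assumption: the gain of one more bit for region i is modeled as
    voi[i] * 4**(-bits[i]) (distortion of a uniform quantizer shrinks
    by ~4x per extra bit)."""
    bits = np.zeros(len(voi), dtype=int)
    for _ in range(bit_budget):
        gains = voi * 4.0 ** (-bits.astype(float))
        bits[np.argmax(gains)] += 1
    return bits

def quantize(x, lo, hi, n_bits):
    """Uniform n_bits quantization of array x on [lo, hi]:
    map to a cell index, reconstruct at the cell midpoint."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(((x - lo) / step).astype(int), 0, levels - 1)
    return lo + (idx + 0.5) * step

# Toy example: three observation regions with unequal (hypothetical) VoI.
voi = np.array([4.0, 1.0, 0.25])
bits = allocate_bits(voi, bit_budget=6)
print(bits.tolist())  # more valuable regions receive more bits: [3, 2, 1]
```

Under this toy gain model, the budget of 6 bits splits as [3, 2, 1], so the most valuable region is quantized with 8 levels and the least valuable with only 2, which is the "quantize valuable observations more precisely" behavior the abstract describes.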


