Multi-UAV Path Planning for Wireless Data Harvesting with Deep Reinforcement Learning

10/23/2020
by Harald Bayerlein, et al.

Harvesting data from distributed Internet of Things (IoT) devices with multiple autonomous unmanned aerial vehicles (UAVs) is a challenging problem requiring flexible path planning methods. We propose a multi-agent reinforcement learning (MARL) approach that, in contrast to previous work, can adapt to profound changes in the scenario parameters defining the data harvesting mission, such as the number of deployed UAVs, number and position of IoT devices, or the maximum flying time, without the need to perform expensive recomputations or relearn control policies. We formulate the path planning problem for a cooperative, non-communicating, and homogeneous team of UAVs tasked with maximizing collected data from distributed IoT sensor nodes subject to flying time and collision avoidance constraints. The path planning problem is translated into a decentralized partially observable Markov decision process (Dec-POMDP), which we solve by training a double deep Q-network (DDQN) to approximate the optimal UAV control policy. By exploiting global-local maps of the environment that are fed into convolutional layers of the agents, we show that our proposed network architecture enables the agents to cooperate effectively by carefully dividing the data collection task among themselves, adapt to large state spaces, and make movement decisions that balance data collection goals, flight-time efficiency, and navigation constraints.
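The core learning update named in the abstract is the double deep Q-network (DDQN) target, in which the online network selects the next action and the target network evaluates it. A minimal NumPy sketch of that target computation is shown below; the batch values and the function name `ddqn_targets` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ddqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.95):
    # Double DQN: pick the greedy next action with the online network...
    best_actions = np.argmax(q_online_next, axis=1)
    # ...but evaluate that action with the target network (reduces
    # the overestimation bias of vanilla DQN).
    q_eval = q_target_next[np.arange(len(best_actions)), best_actions]
    # Terminal transitions (dones == 1) bootstrap no future value.
    return rewards + gamma * (1.0 - dones) * q_eval

# Toy batch of 2 transitions with 3 discrete UAV actions (values invented).
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])
q_online_next = np.array([[0.2, 0.8, 0.1],
                          [0.5, 0.4, 0.9]])
q_target_next = np.array([[0.3, 0.6, 0.2],
                          [0.1, 0.2, 0.7]])
targets = ddqn_targets(rewards, dones, q_online_next, q_target_next)
# First transition: 1.0 + 0.95 * 0.6 = 1.57; second is terminal, so 0.0.
```

In the paper's setting each homogeneous agent would train against such targets on its own observations of the shared environment; the exact discount factor and reward shaping are mission-specific.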
