Q-learning Based System for Path Planning with UAV Swarms in Obstacle Environments

by Alejandro Puente-Castro, et al.

Path planning methods for the autonomous control of Unmanned Aerial Vehicle (UAV) swarms are on the rise because of the advantages they bring, and the number of scenarios requiring autonomous control of multiple UAVs keeps growing. Most of these scenarios contain many obstacles, such as power lines or trees. If all UAVs can be operated autonomously, personnel expenses are reduced; if their flight paths are also optimal, energy consumption drops, leaving more battery time for other operations. In this paper, a Reinforcement Learning based system built on Q-Learning is proposed for solving this problem in environments with obstacles. This method allows a model, in this particular case an Artificial Neural Network, to self-adjust by learning from its mistakes and successes. Regardless of the size of the map or the number of UAVs in the swarm, the goal of the computed paths is to ensure complete coverage of an area with fixed obstacles for tasks like field prospecting. No goal setting or prior information beyond the provided map is required. For experimentation, five maps of different sizes, each with different obstacles, were used, and the experiments were performed with varying numbers of UAVs. Results are measured by the total number of actions taken by all UAVs to complete the task in each experiment: the fewer the actions, the shorter the paths and the lower the energy consumption. The results are satisfactory, showing that the system obtains solutions in fewer movements as the number of UAVs increases. For context, these results are compared against another state-of-the-art approach.
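To make the underlying idea concrete, below is a minimal, single-agent tabular Q-learning sketch of the coverage task the abstract describes. It is an illustration only: the grid, reward values, and hyperparameters are assumptions, the state (position alone) is a simplification, and the paper itself uses an Artificial Neural Network and multiple UAVs rather than a lookup table and one agent.

```python
import random

# Hypothetical 5x5 map: 0 = free cell, 1 = fixed obstacle (assumption for illustration).
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
ROWS, COLS = len(GRID), len(GRID[0])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, action, visited):
    """Apply one move; reward newly covered free cells, penalize blocked moves."""
    r, c = pos[0] + action[0], pos[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or GRID[r][c] == 1:
        return pos, -1.0  # bumped into the map edge or an obstacle
    reward = 1.0 if (r, c) not in visited else -0.1  # discourage revisiting
    visited.add((r, c))
    return (r, c), reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    random.seed(seed)
    q = {}  # Q-table keyed by (position, action index)
    for _ in range(episodes):
        pos, visited = (0, 0), {(0, 0)}
        for _ in range(60):  # bounded episode length
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((pos, i), 0.0))
            nxt, reward = step(pos, ACTIONS[a], visited)
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q[(pos, a)] = q.get((pos, a), 0.0) + alpha * (
                reward + gamma * best_next - q.get((pos, a), 0.0))
            pos = nxt
    return q
```

Counting the moves a trained policy needs to cover all free cells corresponds to the action-count metric the abstract uses; extending this to a swarm would mean sharing the visited set across agents, which is where adding UAVs can shorten each individual path.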


