Robotic Navigation using Entropy-Based Exploration

06/17/2019
by Muhammad Usama, et al.

Robotic navigation is the task of finding a safe and feasible path between two points in a complex environment and traversing it. We approach robotic navigation with reinforcement learning, using deep Q-networks to train agents to solve the navigation task. We compare Entropy-Based Exploration (EBE) with the widely used ϵ-greedy exploration strategy by training agents with each of them in simulation. The trained agents are then tested on different versions of the environment to assess the generalization ability of the learned policies. We also deploy the learned policies on a real robot in a complex real-world environment without any fine-tuning and compare the effectiveness of the two exploration strategies in this setting. A video showing experiments on the TurtleBot3 platform is available at <https://youtu.be/NHT-EiN_4n8>.
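To illustrate the difference between the two strategies being compared, the sketch below shows how an action might be selected from a vector of Q-values under each scheme. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes EBE sets the per-state probability of taking a random action from the normalized entropy of a softmax policy over the current Q-values, while ϵ-greedy uses a single ϵ for every state; the function names and normalization details here are hypothetical.

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng=np.random):
    """epsilon-greedy: take a uniformly random action with fixed probability
    epsilon, otherwise the greedy (highest Q-value) action."""
    if rng.random() < epsilon:
        return int(rng.randint(len(q_values)))
    return int(np.argmax(q_values))

def entropy_based_action(q_values, rng=np.random):
    """Entropy-based exploration (sketch): explore with probability equal to the
    normalized entropy of a softmax policy over the Q-values, so exploration is
    high in states where the Q-values are nearly uniform (agent is uncertain)
    and low where one action clearly dominates."""
    q = np.asarray(q_values, dtype=np.float64)
    probs = np.exp(q - q.max())               # numerically stable softmax
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    explore_prob = entropy / np.log(len(q))    # normalize by max entropy log|A|
    if rng.random() < explore_prob:
        return int(rng.randint(len(q)))
    return int(np.argmax(q))
```

The key contrast is that ϵ-greedy explores at the same rate everywhere, whereas the entropy-based rule adapts the exploration rate state by state as the learned Q-values become more decisive.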
