Learning Vision-Guided Dynamic Locomotion Over Challenging Terrains

09/09/2021
by Zhaocheng Liu, et al.

Legged robots have become increasingly powerful and popular in recent years for their potential to bring the mobility of autonomous agents to the next level. This work presents a deep reinforcement learning approach that learns a robust Lidar-based perceptual locomotion policy in a partially observable environment using Proximal Policy Optimisation. Visual perception is critical for actively overcoming challenging terrains. To this end, we propose a novel learning strategy, the Dynamic Reward Strategy (DRS), which serves as an effective heuristic for learning a versatile gait with a neural network architecture that does not require access to history data. Moreover, in a modified version of the OpenAI Gym environment, the proposed approach achieves a success rate of over 90% on all tested challenging terrains.
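The abstract describes training the policy with Proximal Policy Optimisation in a modified OpenAI Gym environment, but the paper's exact environment, observation layout, and hyperparameters are not given here. The snippet below is therefore only a minimal sketch of that kind of setup, using stable-baselines3's PPO on BipedalWalkerHardcore-v3 (a stock Gym terrain task with Lidar-style range readings) as a stand-in for the authors' modified environment; the environment choice, classic pre-0.26 Gym step/reset API, and all hyperparameters are assumptions, not the paper's values.

```python
# Minimal sketch: train a feed-forward PPO policy on a Gym-style terrain task.
# BipedalWalkerHardcore-v3 and the hyperparameters below are placeholders for
# the paper's modified environment and settings, which are not reproduced here.
import gym
from stable_baselines3 import PPO

# Stand-in locomotion environment; its observations include Lidar range
# readings alongside proprioceptive state, similar in spirit to the paper.
env = gym.make("BipedalWalkerHardcore-v3")

model = PPO(
    policy="MlpPolicy",   # plain feed-forward policy, i.e. no access to history data
    env=env,
    n_steps=2048,
    batch_size=64,
    gamma=0.99,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)

# Roll out the trained policy once to inspect the learned gait
# (classic Gym API: reset() returns obs, step() returns a 4-tuple).
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```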
