Learning visual servo policies via planner cloning

05/24/2020
by Ulrich Viereck, et al.

Learning control policies for visual servoing in novel environments is an important problem. However, standard model-free policy learning methods are slow. This paper explores planner cloning: using behavior cloning to learn policies that mimic the behavior of a full-state motion planner in simulation. We propose Penalized Q Cloning (PQC), a new behavior cloning algorithm. We show that it outperforms several baselines and ablations on challenging problems involving visual servoing in novel environments while avoiding obstacles. Finally, we demonstrate that these policies can be transferred effectively onto a real robotic platform, achieving approximately an 87% success rate both in simulation and on a real robot.
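The core planner-cloning idea — collect (state, planner action) pairs from a full-state planner in simulation, then fit a policy by supervised learning — can be sketched with a toy example. Everything below (the goal-seeking "planner", the linear policy, the dataset sizes) is an illustrative assumption, not the paper's PQC implementation:

```python
import numpy as np

# Toy stand-in for a full-state motion planner: returns a unit step
# toward a fixed goal. (Illustrative assumption, not the paper's planner.)
def planner_action(state, goal):
    direction = goal - state
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-8 else np.zeros_like(direction)

rng = np.random.default_rng(0)
goal = np.array([1.0, 1.0])

# Behavior-cloning dataset: states paired with the planner's actions.
states = rng.uniform(-1.0, 1.0, size=(500, 2))
actions = np.array([planner_action(s, goal) for s in states])

# Fit a linear policy (with bias term) by least squares to mimic the planner.
X = np.hstack([states, np.ones((len(states), 1))])
W, *_ = np.linalg.lstsq(X, actions, rcond=None)

def cloned_policy(state):
    return np.append(state, 1.0) @ W

# The cloned policy should roughly reproduce the planner's direction.
test_state = np.array([-0.5, -0.5])
pred = cloned_policy(test_state)
expert = planner_action(test_state, goal)
cosine = float(np.dot(pred / np.linalg.norm(pred), expert))
```

In the paper's setting the policy input would be camera observations rather than full state, which is what makes the cloned policy deployable on a real robot where the planner's privileged state is unavailable.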

