Learning View and Target Invariant Visual Servoing for Navigation

03/04/2020
by Yimeng Li, et al.

Recent advances in deep reinforcement learning have revived interest in data-driven, learning-based approaches to navigation. In this paper we propose to learn viewpoint-invariant and target-invariant visual servoing for local mobile robot navigation: given an initial view and either the goal view or an image of a target, we train a deep convolutional network controller to reach the desired goal. We present a new architecture for this task that rests on the ability to establish correspondences between the initial and goal views and on a novel reward structure motivated by the traditional feedback control error. The advantage of the proposed model is that it requires neither calibration nor depth information and achieves robust visual servoing across a variety of environments and targets without any parameter fine-tuning. We present a comprehensive evaluation of the approach and a comparison with other deep learning architectures as well as classical visual servoing methods in a visually realistic simulation environment. The presented model overcomes the brittleness of classical visual-servoing-based methods and achieves significantly higher generalization capability than previous learning approaches.
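For context on the "traditional feedback control error" that motivates the reward, the sketch below shows classical image-based visual servoing (IBVS): the feature error e = s - s*, the control law v = -λ L⁺ e, and an illustrative reward equal to the negative error norm. This is not the paper's code or exact reward; the feature correspondences, point depths, and gain used here are assumptions for the example, and depth/calibration are precisely the quantities the learned controller is designed to do without.

```python
# Minimal sketch (not the paper's implementation): classical IBVS error,
# control law, and an illustrative feedback-error-based reward.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_step(s, s_star, depths, lam=0.5):
    """One classical IBVS step: error e = s - s*, camera velocity v = -lam * pinv(L) @ e."""
    e = (s - s_star).reshape(-1)                      # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])  # stacked interaction matrix
    v = -lam * np.linalg.pinv(L) @ e                  # 6-DOF camera velocity command
    reward = -np.linalg.norm(e)                       # illustrative reward from the feedback error
    return v, reward

# Example: four corresponding points in the current and goal views (normalized coordinates).
s      = np.array([[0.10, 0.05], [-0.12, 0.08], [0.09, -0.11], [-0.10, -0.07]])
s_star = np.array([[0.05, 0.02], [-0.06, 0.04], [0.04, -0.05], [-0.05, -0.03]])
depths = np.full(4, 2.0)                              # assumed point depths in meters
v, reward = ibvs_step(s, s_star, depths)
print("velocity command:", v)
print("reward:", reward)
```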

Related research

08/08/2019 · Vision-based Navigation Using Deep Reinforcement Learning
Deep reinforcement learning (RL) has been successfully applied to a vari...

12/20/2017 · Sim2Real View Invariant Visual Servoing by Recurrent Control
Humans are remarkably proficient at controlling their limbs and tools fr...

07/16/2022 · Role of reward shaping in object-goal navigation
Deep reinforcement learning approaches have been a popular method for vi...

09/16/2016 · Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning
Two less addressed issues of deep reinforcement learning are (1) lack of...

04/22/2022 · Transferring ConvNet Features from Passive to Active Robot Self-Localization: The Use of Ego-Centric and World-Centric Views
The training of a next-best-view (NBV) planner for visual place recognit...

07/19/2021 · DeepSocNav: Social Navigation by Imitating Human Behaviors
Current datasets to train social behaviors are usually borrowed from sur...

01/13/2021 · Memory-Augmented Reinforcement Learning for Image-Goal Navigation
In this work, we address the problem of image-goal navigation in the con...