Learning to Move with Affordance Maps

01/08/2020
by William Qi, et al.

The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches to exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but they are notoriously sample-inefficient, difficult to generalize to novel settings, and hard to interpret. In this paper, we combine the best of both worlds with a modular approach that learns a spatial representation of a scene trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that, through active self-supervised experience gathering, learns to predict a spatial affordance map elucidating which parts of a scene are navigable. In contrast to most simulation environments, which assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches to both exploration and navigation, providing significant improvements in performance.
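To make the coupling concrete, here is a minimal sketch of how a predicted affordance map might feed a classical planner. The grid values, the threshold, and the function name are illustrative assumptions, not the paper's implementation: in the actual system the per-cell navigability scores would come from the learned predictor, whereas here they are hard-coded, and the planner is a plain A* search over cells predicted navigable.

```python
# Hypothetical sketch: coupling a (here hard-coded) affordance map with a
# classical A* planner. In the paper, the per-cell navigability scores
# would be produced by the learned, self-supervised predictor.
import heapq


def plan_with_affordance(affordance, start, goal, threshold=0.5):
    """A* over grid cells whose predicted navigability exceeds threshold."""
    rows, cols = len(affordance), len(affordance[0])

    def h(cell):
        # Manhattan-distance heuristic (admissible on a 4-connected grid).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            # Expand only neighbors the affordance map deems navigable.
            if 0 <= nr < rows and 0 <= nc < cols and affordance[nr][nc] >= threshold:
                heapq.heappush(
                    frontier,
                    (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no navigable route found


# Toy 3x4 affordance grid: low scores mark cells predicted hazardous
# (e.g. a dynamic actor or a semantic hazard), forcing a detour.
grid = [
    [0.9, 0.9, 0.2, 0.9],
    [0.9, 0.1, 0.1, 0.9],
    [0.9, 0.9, 0.9, 0.9],
]
path = plan_with_affordance(grid, start=(0, 0), goal=(0, 3))
```

Because the planner only ever sees a navigability grid, the learned predictor and the geometric planner remain separate modules, which is the interpretability and modularity benefit the abstract highlights.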


Related research

- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System (02/13/2020)
  Mobile robot navigation is typically regarded as a geometric problem, in...

- Navigating to Objects in the Real World (12/02/2022)
  Semantic navigation is necessary to deploy mobile robots in uncontrolled...

- GraphMapper: Efficient Visual Navigation by Scene Graph Generation (05/17/2022)
  Understanding the geometric relationships between objects in a scene is ...

- Deep-Reinforcement-Learning-Based Semantic Navigation of Mobile Robots in Dynamic Environments (08/02/2020)
  Mobile robots have gained increased importance within industrial tasks s...

- Seeing the Un-Scene: Learning Amodal Semantic Maps for Room Navigation (07/20/2020)
  We introduce a learning-based approach for room navigation using semanti...

- D2SLAM: Semantic visual SLAM based on the influence of Depth for Dynamic environments (10/16/2022)
  Taking into account the dynamics of the scene is the most effective solu...

- A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-based Semantic Scene Understanding (09/12/2022)
  Simultaneous Localisation and Mapping (SLAM) is one of the fundamental p...
