Vision-and-Dialog Navigation

07/10/2019
by   Jesse Thomason, et al.

Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions of their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest-path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://github.com/mmurray/cvdn
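The Navigation from Dialog History setup described above can be sketched in code. The following is a minimal, hypothetical illustration (names such as `DialogTurn` and `build_ndh_input` are our own, not from the paper or its repository) of how an agent's input sequence might be assembled from a target object and the last k dialog turns, so that a larger k lets the model look farther back in the dialog history:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DialogTurn:
    """One Navigator/Oracle exchange in a CVDN-style dialog."""
    question: str  # the Navigator's request for help
    answer: str    # the Oracle's guidance toward the goal

def build_ndh_input(target_object: str, history: List[DialogTurn], k: int) -> List[str]:
    """Flatten the target object and the last k dialog turns into one token
    sequence for a sequence-to-sequence agent (hypothetical helper; the
    actual model's input format may differ)."""
    tokens = ["<TAR>"] + target_object.split()
    recent = history[-k:] if k > 0 else []  # k == 0: target object only
    for turn in recent:
        tokens += ["<NAV>"] + turn.question.split()
        tokens += ["<ORA>"] + turn.answer.split()
    return tokens

history = [
    DialogTurn("Should I go upstairs?", "Yes, take the stairs on your left."),
    DialogTurn("Is the plant in this room?", "No, continue down the hallway."),
]
print(build_ndh_input("potted plant", history, k=1))
```

With k=0 only the target object is encoded; increasing k concatenates earlier question/answer pairs onto the input, which is one simple way to vary how much dialog history the model conditions on.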


Related research

03/15/2020 · Vision-Dialog Navigation by Exploring Cross-modal Memory
Vision-dialog navigation posed as a new holy-grail task in vision-langua...

10/23/2020 · The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation
Autonomous robot systems for applications from search and rescue to assi...

05/23/2023 · R2H: Building Multimodal Navigation Helpers that Respond to Help
The ability to assist humans during a navigation task in a supportive ro...

08/18/2020 · Describing Unseen Videos via Multi-Modal Cooperative Dialog Agents
With the arising concerns for the AI systems provided with direct access...

05/02/2020 · RMM: A Recursive Mental Model for Dialog Navigation
Fluent communication requires understanding your audience. In the new co...

08/22/2023 · Target-Grounded Graph-Aware Transformer for Aerial Vision-and-Dialog Navigation
This report details the methods of the winning entry of the AVDN Challen...

11/16/2020 · Where Are You? Localization from Embodied Dialog
We present Where Are You? (WAY), a dataset of 6k dialogs in which two h...
