ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments

11/15/2020
by Hyounghun Kim, et al.

For embodied agents, navigation is an important ability but not an isolated goal. Agents are also expected to perform specific tasks after reaching the target location, such as picking up objects and assembling them into a particular arrangement. We combine Vision-and-Language Navigation, the assembly of collected objects, and object referring expression comprehension to create a novel joint navigation-and-assembly task, named ArraMon. During this task, the agent (similar to a PokeMON GO player) is asked to find and collect different target objects one-by-one by navigating based on natural language instructions in a complex, realistic outdoor environment, but then also to ARRAnge the collected objects part-by-part in an egocentric grid-layout environment. To support this task, we implement a 3D dynamic environment simulator and collect a dataset (in English, also extended to Hindi) with human-written navigation and assembly instructions and the corresponding ground-truth trajectories. We also filter the collected instructions via a verification stage, yielding a total of 7.7K task instances (30.8K instructions and paths). We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and leaves a wide scope for future work. Our dataset, simulator, and code are publicly available at: https://arramonunc.github.io
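Of the metrics listed, nDTW (normalized Dynamic Time Warping) is a widely used path-fidelity measure for instruction-conditioned navigation. As a rough illustration, the Python sketch below implements the common definition nDTW = exp(-DTW(R, Q) / (|R| * d_th)), where R is the reference path, Q the agent path, and d_th a success-threshold distance. This is a minimal sketch of the general metric, assuming Euclidean distance between positions; it is not the ArraMon codebase, and the function names and the default threshold are illustrative assumptions.

    import numpy as np

    def dtw(reference, query, dist):
        # Classic dynamic-time-warping cost between two paths, computed
        # with a cumulative-cost dynamic programming table.
        n, m = len(reference), len(query)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = dist(reference[i - 1], query[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],
                                     cost[i, j - 1],
                                     cost[i - 1, j - 1])
        return cost[n, m]

    def ndtw(reference, query, success_threshold=3.0):
        # nDTW lies in (0, 1]; higher is better. success_threshold (d_th)
        # is an assumed placeholder value, not the ArraMon setting.
        euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
        return float(np.exp(-dtw(reference, query, euclid)
                            / (len(reference) * success_threshold)))

For example, ndtw([(0, 0), (1, 0), (2, 0)], [(0, 0), (1, 1), (2, 0)]) rewards an agent path that stays close to the reference trajectory, and identical paths score exactly 1.0.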


Related research

06/17/2023 - MO-VLN: A Multi-Task Benchmark for Open-set Zero-Shot Vision-and-Language Navigation
Given a natural language, a general robot has to comprehend the instruct...

01/23/2018 - CHALET: Cornell House Agent Learning Environment
We present CHALET, a 3D house simulator with support for navigation and ...

07/12/2017 - Source-Target Inference Models for Spatial Instruction Understanding
Models that can execute natural language instructions for situated robot...

10/14/2022 - Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation
We address a practical yet challenging problem of training robot agents ...

09/19/2019 - RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation
Following navigation instructions in natural language requires a composi...

07/14/2023 - Switching Head-Tail Funnel UNITER for Dual Referring Expression Comprehension with Fetch-and-Carry Tasks
This paper describes a domestic service robot (DSR) that fetches everyda...

07/11/2019 - Effective and General Evaluation for Instruction Conditioned Navigation using Dynamic Time Warping
In instruction conditioned navigation, agents interpret natural language...
