OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav

03/14/2023
by Karmesh Yadav, et al.

We present a single neural network architecture composed of task-agnostic components (ViTs, convolutions, and LSTMs) that achieves state-of-the-art results on both the ImageNav ("go to location in <this picture>") and ObjectNav ("find a chair") tasks without any task-specific modules like object detection, segmentation, mapping, or planning modules. Such general-purpose methods offer advantages of simplicity in design, positive scaling with available compute, and versatile applicability to multiple tasks. Our work builds upon the recent success of self-supervised learning (SSL) for pre-training vision transformers (ViT). However, while the training recipes for convolutional networks are mature and robust, the recipes for ViTs are contingent and brittle, and in the case of ViTs for visual navigation, yet to be fully discovered. Specifically, we find that vanilla ViTs do not outperform ResNets on visual navigation. We propose the use of a compression layer operating over ViT patch representations to preserve spatial information, along with policy training improvements. These improvements allow us to demonstrate positive scaling laws for the first time in visual navigation tasks. Consequently, our model advances state-of-the-art performance on ImageNav from 54.2% to 82.0% success and performs competitively against concurrent state-of-the-art on ObjectNav (64.0% vs. 65.0% success). Overall, this work does not present a fundamentally new approach, but rather recommendations for training a general-purpose architecture that achieves state-of-the-art performance today and could serve as a strong baseline for future methods.
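The key architectural idea above — compressing ViT patch representations while preserving their spatial layout, rather than collapsing them into a single pooled vector — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid size, token dimension, and compressed dimension are assumptions, and the learned compression is simplified here to a per-patch linear projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: a ViT over a 256x256 image with 16x16 patches yields a
# 16x16 grid of patch tokens, each of dimension 768. compressed_dim is a
# hypothetical choice for illustration.
grid, dim, compressed_dim = 16, 768, 128

patch_tokens = rng.standard_normal((grid * grid, dim))   # [256, 768]

# Compression sketched as a learned per-patch projection: it shrinks each
# token but keeps the HxW grid layout, so the flattened output still encodes
# where each feature came from (unlike global average or CLS pooling, which
# discard spatial structure).
W = rng.standard_normal((dim, compressed_dim)) * 0.02

spatial = patch_tokens.reshape(grid, grid, dim)          # [16, 16, 768]
compressed = spatial @ W                                 # [16, 16, 128]
policy_input = compressed.reshape(-1)                    # flat vector fed to the LSTM policy

print(policy_input.shape)  # (32768,)
```

The point of the sketch is the contrast with pooling: a pooled feature would be a single 768-dim vector, whereas the compressed grid keeps a distinct (smaller) feature per spatial location before flattening.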


