Visionary: Vision architecture discovery for robot learning

03/26/2021
by Iretiayo Akinola, et al.

We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low-dimensional action inputs and high-dimensional visual inputs. Our approach automatically designs architectures while training on the task, discovering novel ways of combining and attending to image feature representations with actions as well as features from previous layers. The discovered architectures achieve higher task success rates, in some cases by a large margin, than a recent high-performing baseline. Our real-robot experiments also confirm that the approach improves grasping performance by 6% and demonstrate a successful neural architecture search and attention connectivity search for a real-robot task.
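As a rough illustration of the kind of connectivity such a search explores, the sketch below fuses a low-dimensional action vector with spatial image features via dot-product attention. This is a hypothetical, minimal example (the function, weight names, and dimensions are assumptions for illustration), not the paper's actual architecture:

```python
import numpy as np

def attend(image_feats, action, rng=None):
    """Hypothetical sketch: an action vector attends over spatial image features.

    image_feats: (H*W, D) flattened grid of visual features
    action:      (A,) low-dimensional action vector
    Returns a single (D,) attended visual feature.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    A, D = action.shape[0], image_feats.shape[1]
    W_q = rng.standard_normal((A, D)) / np.sqrt(A)  # project action to a query
    query = action @ W_q                            # (D,)
    scores = image_feats @ query / np.sqrt(D)       # (H*W,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over locations
    return weights @ image_feats                    # weighted sum of features

feats = np.random.default_rng(1).standard_normal((16, 32))  # 4x4 grid, 32-d
action = np.array([0.1, -0.3, 0.5, 0.2])                    # e.g. 4-DoF grasp
fused = attend(feats, action)
print(fused.shape)  # (32,)
```

An architecture search would decide, per layer, whether to insert such an attention connection, concatenate, or skip, and train the resulting network end to end on the manipulation task.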


Related research

09/24/2020 · Disentangled Neural Architecture Search
Neural architecture search has shown its great potential in various area...

07/08/2021 · Bag of Tricks for Neural Architecture Search
While neural architecture search methods have been successful in previou...

04/21/2021 · Making Differentiable Architecture Search less local
Neural architecture search (NAS) is a recent methodology for automating ...

02/27/2021 · Neural Architecture Search From Task Similarity Measure
In this paper, we propose a neural architecture search framework based o...

10/27/2018 · Training Frankenstein's Creature to Stack: HyperTree Architecture Search
We propose HyperTrees for the low cost automatic design of multiple-inpu...

06/03/2019 · Discovering Neural Wirings
The success of neural networks has driven a shift in focus from feature ...
