NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis

01/18/2023
by Allan Zhou, et al.

Expert demonstrations are a rich source of supervision for training visual robotic manipulation policies, but imitation learning methods often require either a large number of demonstrations or expensive online expert supervision to learn reactive closed-loop behaviors. In this work, we introduce SPARTN (Synthetic Perturbations for Augmenting Robot Trajectories via NeRF): a fully offline data augmentation scheme for improving robot policies that use eye-in-hand cameras. Our approach leverages neural radiance fields (NeRFs) to synthetically inject corrective noise into visual demonstrations: NeRFs render perturbed viewpoints along each demonstration while the corresponding corrective actions are computed simultaneously. This requires no additional expert supervision or environment interaction, and distills the geometric information in NeRFs into a real-time, reactive, RGB-only policy. On a simulated 6-DoF visual grasping benchmark, SPARTN improves success rates by 2.8× over imitation learning without the corrective augmentations, and even outperforms some methods that use online supervision. It also closes the gap between RGB-only and RGB-D success rates, eliminating the need for depth sensors. In real-world 6-DoF robotic grasping experiments with limited human demonstrations, our method improves absolute success rates by 22.5% on average, including on objects that are traditionally challenging for depth-based methods. See video results at <https://bland.website/spartn>.
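To illustrate the idea, the sketch below shows one way such a corrective augmentation step could look. This is a minimal, hypothetical sketch rather than the authors' released code: it assumes a per-demonstration NeRF exposing a `nerf.render(pose)` method that returns an RGB image from a given camera pose, represents poses and actions as 4x4 homogeneous transforms (actions as relative end-effector/camera motions), and uses illustrative perturbation scales. Each augmented sample pairs a NeRF-rendered perturbed viewpoint with an action that first undoes the perturbation and then applies the original expert action.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def sample_perturbation(trans_std=0.01, rot_std_deg=2.0):
    """Sample a small random SE(3) perturbation as a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(
        np.random.normal(scale=np.deg2rad(rot_std_deg), size=3)
    ).as_matrix()
    T[:3, 3] = np.random.normal(scale=trans_std, size=3)
    return T


def augment_step(nerf, cam_pose, expert_action):
    """Create one corrective (observation, action) pair for a demo timestep.

    Assumptions (hypothetical interface): nerf.render(pose) returns an RGB
    image from the given camera pose; cam_pose and expert_action are 4x4
    homogeneous transforms, with the action expressed in the camera frame.
    """
    delta = sample_perturbation()
    perturbed_pose = cam_pose @ delta            # noisy viewpoint near the demo
    perturbed_rgb = nerf.render(perturbed_pose)  # synthesize the view with NeRF
    # Corrective label: undo the perturbation, then apply the expert action,
    # so perturbed_pose @ corrective_action == cam_pose @ expert_action.
    corrective_action = np.linalg.inv(delta) @ expert_action
    return perturbed_rgb, corrective_action
```

Running this over every timestep of every demonstration yields additional (image, action) pairs entirely offline, without any further expert queries or environment interaction.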


