Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos

by Annie S. Chen et al.

We are motivated by the goal of generalist robots that can complete a wide range of tasks across many environments. Critical to this is the robot's ability to acquire some metric of task success or reward, which is necessary for reinforcement learning, planning, or knowing when to ask for help. For a general-purpose robot operating in the real world, this reward function must also generalize broadly across environments, tasks, and objects, while depending only on on-board sensor observations (e.g., RGB images). While deep learning on large and diverse datasets has shown promise as a path towards such generalization in computer vision and natural language, collecting high-quality datasets of robotic interaction at scale remains an open challenge. In contrast, "in-the-wild" videos of humans (e.g., on YouTube) contain an extensive collection of people performing interesting tasks across a diverse range of settings. In this work, we propose a simple approach, the Domain-agnostic Video Discriminator (DVD), which learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task, and which generalizes by virtue of learning from a small amount of robot data together with a broad dataset of human videos. We find that by leveraging diverse human datasets, this reward function (a) generalizes zero-shot to unseen environments, (b) generalizes zero-shot to unseen tasks, and (c) can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demonstration.
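The core idea of the abstract — a discriminator that scores whether two videos depict the same task, and is then reused as a reward by comparing a robot clip against a human demonstration — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encoder`, `feat_dim`, and the use of pre-pooled 512-dimensional clip features are hypothetical stand-ins for a real video backbone.

```python
import torch
import torch.nn as nn

class DVDDiscriminator(nn.Module):
    """Hypothetical sketch of a same-task video discriminator."""

    def __init__(self, input_dim=512, feat_dim=128):
        super().__init__()
        # Stand-in for a video backbone; real systems would encode raw frames.
        self.encoder = nn.Sequential(nn.Linear(input_dim, feat_dim), nn.ReLU())
        # Pairwise head: classifies whether two encoded clips share a task.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, video_a, video_b):
        za, zb = self.encoder(video_a), self.encoder(video_b)
        logit = self.classifier(torch.cat([za, zb], dim=-1))
        # Probability in [0, 1] that the two clips perform the same task.
        return torch.sigmoid(logit)

# Used as a reward: score a candidate robot clip against one human demo.
model = DVDDiscriminator()
robot_clip = torch.randn(1, 512)   # placeholder pooled clip features
human_demo = torch.randn(1, 512)
reward = model(robot_clip, human_demo).item()
```

In a planning loop such as visual model predictive control, candidate action sequences would be rolled out, each imagined robot clip scored this way against the human demonstration, and the highest-scoring sequence executed.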


