Learning better generative models for dexterous, single-view grasping of novel objects

by Marek Kopicki et al.

This paper concerns the problem of learning to grasp dexterously, so that novel objects seen from only a single viewpoint can then be grasped. Recently, progress has been made in data-efficient learning of generative grasp models that transfer well to novel objects. These generative grasp models are learned from demonstration (LfD). One weakness is that, as this paper shows, grasp transfer under challenging single-view conditions is unreliable. A second is that the number of generative model elements rises linearly with the number of training examples, which limits the potential of these models for generalisation and continual improvement. This paper shows how to address both problems. Several technical contributions are made: (i) a view-based model of a grasp; (ii) a method for combining and compressing multiple grasp models; (iii) a new way of evaluating contacts that is used both to generate and to score grasps. Together, these improve grasp performance and reduce the number of models needed for grasp transfer. These advances, in turn, also allow the introduction of autonomous training, in which the robot learns from self-generated grasps. Evaluation on a challenging test set shows that, with innovations (i)-(iii) deployed, grasp transfer success rises from a baseline of 55.1%; the differences are statistically significant. In total, across all experiments, 539 test grasps were executed on real objects.
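To make the idea of a generative contact model concrete, the following is a minimal, illustrative sketch of how demonstrated contact features can be turned into a kernel density model and used to score candidate grasps. It is not the paper's implementation: the function names (`kde_score`, `grasp_likelihood`), the Gaussian kernel, the fixed bandwidth, and the simple product-of-contact-densities score are all assumptions made for illustration.

```python
import numpy as np

def kde_score(query, samples, bandwidth=0.1):
    """Density of a query contact feature under a kernel density model
    built from demonstrated contact features (isotropic Gaussian kernels).
    Illustrative only; bandwidth is an assumed hyperparameter."""
    diffs = samples - query                      # (N, D) differences to each sample
    sq = np.sum(diffs ** 2, axis=1) / (2.0 * bandwidth ** 2)
    return float(np.mean(np.exp(-sq)))           # unnormalised kernel average

def grasp_likelihood(contact_features, model_samples, bandwidth=0.1):
    """Score a candidate grasp as the product of the densities of its
    contact features under the learned contact model (a simple
    product-of-experts-style combination, assumed here for illustration)."""
    scores = [kde_score(f, model_samples, bandwidth) for f in contact_features]
    return float(np.prod(scores))
```

A grasp whose contacts resemble the demonstrated ones scores higher than one whose contacts fall in regions the model has never seen, which is the basic mechanism by which such models both generate and rank grasp candidates.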




