Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder

09/17/2021
by Clément Rolinat, et al.

Grasp planning, and more specifically grasp space exploration, is still an open issue in robotics. This article presents a data-driven methodology to model the grasp space of a multi-fingered adaptive gripper for known objects. The method relies on a limited dataset of manually specified expert grasps and uses a variational autoencoder to learn intrinsic grasp features in a computationally compact way. The learnt model can then be used to generate new, non-learnt gripper configurations to explore the grasp space.
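The generation step described above can be sketched with a minimal variational autoencoder: an encoder maps a grasp to a latent distribution, the reparameterization trick samples from it, and a decoder maps latent points back to gripper configurations. The sketch below is illustrative only; the dimensions (a 9-D configuration vector, a 2-D latent space) and the randomly initialised weights are assumptions standing in for the article's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a grasp is a 9-D gripper configuration
# vector (e.g. palm pose + finger joint angles), compressed into a
# 2-D latent space by the VAE.
X_DIM, H_DIM, Z_DIM = 9, 16, 2

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(X_DIM, H_DIM))
W_mu = rng.normal(scale=0.1, size=(H_DIM, Z_DIM))
W_logvar = rng.normal(scale=0.1, size=(H_DIM, Z_DIM))
W_dec1 = rng.normal(scale=0.1, size=(Z_DIM, H_DIM))
W_dec2 = rng.normal(scale=0.1, size=(H_DIM, X_DIM))

def encode(x):
    # Returns mean and log-variance of the approximate posterior q(z|x).
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); keeps sampling differentiable
    # during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Maps a latent point back to a gripper configuration vector.
    return np.tanh(z @ W_dec1) @ W_dec2

# Encode one (here random) expert grasp and reconstruct it.
x = rng.standard_normal((1, X_DIM))
mu, logvar = encode(x)
recon = decode(reparameterize(mu, logvar))

# Grasp space exploration: sample latent points from the prior and
# decode them into new, non-learnt gripper configurations.
z_samples = rng.standard_normal((5, Z_DIM))
new_grasps = decode(z_samples)
print(new_grasps.shape)  # (5, 9)
```

In a trained model, the sampled configurations would lie near the manifold of expert grasps, which is what makes decoding prior samples a practical exploration strategy.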


Related research

09/20/2021 · Human Initiated Grasp Space Exploration Algorithm for an Underactuated Robot Gripper Using Variational Autoencoder
Grasp planning and most specifically the grasp space exploration is stil...

01/23/2020 · Semi-supervised Grasp Detection by Representation Learning in a Vector Quantized Latent Space
Determining quality grasps from an image is an important area of researc...

11/18/2020 · ACRONYM: A Large-Scale Grasp Dataset Based on Simulation
We introduce ACRONYM, a dataset for robot grasp planning based on physic...

09/12/2018 · Attention based visual analysis for fast grasp planning with multi-fingered robotic hand
We present an attention based visual analysis framework to compute grasp...

12/31/2018 · A dataset of 40K naturalistic 6-degree-of-freedom robotic grasp demonstrations
Modern approaches to grasp planning often involve deep learning. However...

06/29/2018 · Workspace Aware Online Grasp Planning
This work provides a framework for a workspace aware online grasp planne...

08/05/2018 · 3D Conceptual Design Using Deep Learning
This article proposes a data-driven methodology to achieve a fast design...
