Generating Quality Grasp Rectangle using Pix2Pix GAN for Intelligent Robot Grasping

by Vandana Kushwaha, et al.

Intelligent robot grasping is a challenging task due to its inherent complexity and the scarcity of labelled data. Because suitable labelled data are crucial for effectively training any deep-learning-based model, including deep reinforcement learning, in this paper we propose to generate grasping poses/rectangles using a Pix2Pix Generative Adversarial Network (Pix2Pix GAN), which takes an image of an object as input and produces the grasping rectangle, tagged to the object, as output. We propose an end-to-end methodology for generating a grasping rectangle and embedding it at an appropriate place on the object to be grasped. Two modules are developed to obtain an optimal grasping rectangle. The first module extracts the pose (position and orientation) of the generated grasping rectangle from the Pix2Pix GAN output; the extracted grasp pose is then translated to the centroid of the object, since we hypothesize that, as in the human way of grasping regularly shaped objects, the centre of mass/centroid is the best place for a stable grasp. For irregularly shaped objects, the generated grasping rectangles are fed to the robot as is for grasp execution. With the limited Cornell Grasping Dataset augmented by our proposed approach, the accuracy of grasping-rectangle generation improves significantly, reaching 87.79%. The results show that our generative-model-based approach gives promising results in executing successful grasps for seen as well as unseen objects.
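The pose-extraction and centroid-translation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the GAN output has been thresholded into a binary grasp-rectangle mask and that an object segmentation mask is available, and it recovers position from the mask's centroid and orientation from its central second moments (a standard image-moments technique).

```python
import numpy as np

def rect_pose(mask):
    """Extract (cx, cy, theta) from a binary grasp-rectangle mask.

    Position is the pixel centroid; orientation comes from the
    central second moments (equivalent to PCA of pixel coordinates).
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    x, y = xs - cx, ys - cy
    # Principal-axis angle of the pixel distribution, in radians.
    theta = 0.5 * np.arctan2(2.0 * (x * y).mean(),
                             (x * x).mean() - (y * y).mean())
    return cx, cy, theta

def translate_to_centroid(grasp_mask, object_mask):
    """Move the extracted grasp pose to the object's centroid.

    Mirrors the paper's hypothesis for regularly shaped objects:
    keep the generated orientation, but centre the grasp on the
    object's centre of mass. Irregular objects would skip this step.
    """
    _, _, theta = rect_pose(grasp_mask)
    ys, xs = np.nonzero(object_mask)
    return xs.mean(), ys.mean(), theta
```

For example, a horizontally elongated rectangle mask yields an orientation near zero, and the returned grasp centre coincides with the object mask's centroid rather than the rectangle's original location.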

Related papers:

- Robotic Grasp Manipulation Using Evolutionary Computing and Deep Reinforcement Learning
- Real-time Grasp Pose Estimation for Novel Objects in Densely Cluttered Environment
- Vision-Based Intelligent Robot Grasping Using Sparse Neural Network
- Deep Dexterous Grasping of Novel Objects from a Single View
- GraspCaps: Capsule Networks Are All You Need for Grasping Familiar Objects
- Modeling Grasp Motor Imagery through Deep Conditional Generative Models
