Deep Grasp: Detection and Localization of Grasps with Deep Neural Networks
A deep learning architecture is proposed to predict graspable locations for robotic manipulation. We consider the more realistic setting in which zero or multiple objects may be present in a scene. By transforming grasp configuration regression into a classification problem with null hypothesis competition, the deep neural network, taking an RGB-D image as input, predicts multiple grasp candidates on a single unseen object, as well as grasp candidates on multiple novel objects, in a single shot. We perform extensive experiments with our framework across different scenarios, including no object, a single object, and multiple objects. We compare with state-of-the-art approaches on the Cornell dataset and achieve 96.0% and 96.1% accuracy on the image-wise split and object-wise split, respectively.
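To illustrate the regression-to-classification idea, the following is a minimal sketch of how a grasp orientation can be decoded from per-class scores that include an explicit "no-grasp" null class competing against discretized orientation bins. The bin count, function names, and decoding logic here are illustrative assumptions, not the paper's actual network head.

```python
import numpy as np

# Hedged sketch: grasp regression recast as classification. For each
# candidate region, the network outputs scores over discretized grasp
# orientation bins plus one extra "null" class (not graspable). The
# bin count (18 bins of 10 degrees) is an assumption for illustration.
NUM_ANGLE_BINS = 18
NULL_CLASS = NUM_ANGLE_BINS  # index of the null-hypothesis class

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def decode_grasp(logits):
    """Return (angle_degrees, confidence), or None if the null class wins."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if best == NULL_CLASS:
        return None  # null hypothesis out-competes every grasp orientation
    angle = best * (180.0 / NUM_ANGLE_BINS)
    return angle, float(probs[best])

# Example: a region where orientation bin 3 (30 degrees) dominates.
logits = np.full(NUM_ANGLE_BINS + 1, -2.0)
logits[3] = 4.0
print(decode_grasp(logits))
```

Because the null class competes in the same softmax as the orientation bins, regions containing no object naturally decode to "no grasp" instead of being forced into a spurious orientation estimate.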