NBMOD: Find It and Grasp It in Noisy Background

06/17/2023
by   Boyuan Cao, et al.

Grasping objects is a fundamental capability of robots, and many tasks such as sorting and picking rely on it. A prerequisite for stable grasping is correctly identifying suitable grasp positions, which is challenging because objects differ widely in shape, density distribution, and barycenter. In recent years, researchers have proposed many methods to address these issues and achieved strong results on public datasets such as the Cornell dataset and the Jacquard dataset. The problem is that the backgrounds in the Cornell and Jacquard datasets are relatively simple, typically just a whiteboard, whereas real-world operating environments can be complex and noisy. Moreover, in real-world scenarios, robots usually only need to grasp a fixed set of object types. To address these issues, we propose a large-scale grasp detection dataset called NBMOD: Noisy Background Multi-Object Dataset, which consists of 31,500 RGB-D images of 20 different types of fruits. Accurate angle prediction has long been a challenging problem in oriented bounding box detection, and this paper presents a Rotation Anchor Mechanism (RAM) to address it. Considering the tight real-time requirements of robotic systems, we also propose a series of lightweight architectures collectively called RA-GraspNet (GraspNet with Rotation Anchor): RARA (network with Rotation Anchor and Region Attention), RAST (network with Rotation Anchor and Semi Transformer), and RAGT (network with Rotation Anchor and Global Transformer). Among them, the RAGT-3/3 model achieves an accuracy of 99%. Our code is available at https://github.com/kmittle/Grasp-Detection-NBMOD.
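The abstract does not spell out how the Rotation Anchor Mechanism works, but rotation-anchor schemes in oriented detection commonly replace direct angle regression with classification over a small set of evenly spaced anchor angles plus regression of a small residual offset. The sketch below illustrates that general idea only; the anchor count `K`, the encoding, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
import math

K = 6                # number of rotation anchors (hypothetical choice)
STEP = math.pi / K   # anchor spacing; grasp angles repeat every pi radians

def encode_angle(theta):
    """Map a ground-truth grasp angle to (anchor index, normalized offset).

    The offset lies in [-0.5, 0.5] anchor-widths, so the network can
    classify the coarse anchor and regress only a small residual instead
    of regressing the full angle directly.
    """
    raw = (theta % math.pi) / STEP
    idx = int(round(raw)) % K      # nearest anchor, wrapping pi back to 0
    offset = raw - round(raw)      # residual in [-0.5, 0.5]
    return idx, offset

def decode_angle(idx, offset):
    """Reconstruct the predicted angle from anchor index and offset."""
    return ((idx + offset) * STEP) % math.pi
```

At training time, the anchor index would be supervised as a classification target and the offset as a regression target; at inference, decoding the predicted pair recovers the grasp angle.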


Related research

03/06/2018 — Fully Convolutional Grasp Detection Network with Oriented Anchor Box
In this paper, we present a real-time approach to predict multiple grasp...

09/08/2018 — A Real-time Robotic Grasp Approach with Oriented Anchor Box
Grasp is an essential skill for robots to interact with humans and the e...

02/24/2022 — When Transformer Meets Robotic Grasping: Exploits Context for Efficient Grasp Detection
In this paper, we present a transformer-based architecture, namely TF-Gr...

07/25/2022 — GE-Grasp: Efficient Target-Oriented Grasping in Dense Clutter
Grasping in dense clutter is a fundamental skill for autonomous robots. ...

01/04/2021 — Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
General robot grasping in clutter requires the ability to synthesize gra...

10/01/2019 — Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGB-D video
Pushing is a fundamental robotic skill. Existing work has shown how to e...

09/18/2019 — Grid Anchor based Image Cropping: A New Benchmark and An Efficient Model
Image cropping aims to improve the composition as well as aesthetic qual...
