Visual-tactile Fusion for Transparent Object Grasping in Complex Backgrounds

11/30/2022
by Shoujie Li, et al.

The accurate detection and grasping of transparent objects are challenging but of significance to robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and variant light conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with a Gaussian distribution based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central location based adaptive grasping strategy are designed, improving the success rate by 36.7% compared to direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves the classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency of transparent objects.
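
The Gaussian distribution based annotation lends itself to a short illustration. The sketch below shows one plausible reading: each labeled grasp point is expanded into a soft quality map via a 2D Gaussian, giving the grasping network a smooth training target rather than a single hard-labeled pixel. The function name, the sigma value, and the per-pixel max-combination rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_grasp_heatmap(height, width, centers, sigma=8.0):
    """Render a grasp-quality map: place a 2D Gaussian around each
    labeled grasp center so that quality decays smoothly with distance
    instead of forming a hard 0/1 mask."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for cx, cy in centers:
        bump = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, bump)  # keep the strongest annotation per pixel
    return heatmap

# Example: a 64x64 training label with two annotated grasp points.
label = gaussian_grasp_heatmap(64, 64, centers=[(20, 30), (45, 12)])
```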
