Cross-Tool and Cross-Behavior Perceptual Knowledge Transfer for Grounded Object Recognition

by Gyan Tatiya et al.

Humans learn about objects through interaction, drawing on multiple senses such as vision, sound, and touch. While vision provides information about an object's appearance, non-visual sensors such as audio and haptics reveal intrinsic properties such as weight, temperature, and hardness. Using tools to interact with objects can expose additional properties that are otherwise hidden (e.g., knives and spoons can be used to examine the properties of food, including its texture and consistency). Robots can likewise use tools to interact with objects and gather information about their implicit properties via non-visual sensors. However, a robot's model for recognizing objects using one tool-mediated behavior does not generalize to a new tool or behavior, because the observed data distributions differ. To address this challenge, we propose a framework that enables robots to transfer implicit knowledge about granular objects across different tools and behaviors. The proposed approach learns a shared latent space from multiple robots' contexts, each produced by the sensory data observed while interacting with objects using tools. We collected a dataset with a UR5 robot that performed 5,400 interactions using 6 tools and 6 behaviors on 15 granular objects, and we evaluated our method on cross-tool and cross-behavior transfer tasks. Our results show that the less experienced target robot can benefit from the experience gained by the source robot and recognize a set of novel objects. We have released the code, datasets, and additional results.
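The core idea of learning a shared latent space from paired interactions can be illustrated with a deliberately simplified sketch. The snippet below is not the paper's model: it substitutes a linear least-squares projection for the learned encoders, uses synthetic stand-in features for the source context (one tool/behavior) and target context (another), and classifies projected target observations with a nearest-centroid recognizer trained only on source-context data. All array shapes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 3 "objects", each observed in paired interactions by a
# source context (tool A, 8-dim features) and a target context (tool B, 6-dim).
n_per_obj, d_src, d_tgt = 20, 8, 6
labels = np.repeat(np.arange(3), n_per_obj)
prototypes_src = rng.normal(size=(3, d_src))
X_src = prototypes_src[labels] + 0.1 * rng.normal(size=(len(labels), d_src))

# Pretend the target context observes a hidden linear distortion of the
# same interactions, plus sensor noise.
W_true = rng.normal(size=(d_src, d_tgt))
X_tgt = X_src @ W_true + 0.1 * rng.normal(size=(len(labels), d_tgt))

# "Shared space" via least squares: a projection mapping target-context
# features into the source-context feature space, fit on paired interactions.
P, *_ = np.linalg.lstsq(X_tgt, X_src, rcond=None)

# Nearest-centroid recognizer trained only on source-context features.
centroids = np.stack([X_src[labels == c].mean(axis=0) for c in range(3)])

def recognize(x_tgt):
    """Project a target-context observation and classify by nearest centroid."""
    z = x_tgt @ P
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

preds = np.array([recognize(x) for x in X_tgt])
accuracy = float(np.mean(preds == labels))
```

In the paper's setting the linear projection would be replaced by learned encoders over multimodal sensory data, and the recognizer would be evaluated on novel objects rather than the training set; the sketch only shows why aligning paired observations lets a classifier trained in one context serve another.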




