Object Permanence Through Audio-Visual Representations

10/20/2020
by Fanjun Bu, et al.

As robots perform manipulation tasks and interact with objects, they may accidentally drop objects that then bounce out of their visual fields (e.g., due to an inadequate grasp of an unfamiliar object). To enable robots to recover from such errors, we draw upon the concept of object permanence: objects remain in existence even when they are not being sensed (e.g., seen) directly. In particular, we developed a multimodal neural network model that uses a partial, observed bounce trajectory and the audio resulting from drop impact as its inputs to predict the full bounce trajectory and the end location of a dropped object. We empirically show that: (1) our multimodal method predicted end locations close to the actual locations (i.e., within the visual field of the robot's wrist camera) and (2) the robot was able to retrieve dropped objects by applying minimal vision-based pick-up adjustments. Additionally, we show that our multimodal method outperformed the vision-only and audio-only baselines in retrieving dropped objects. Our results provide insights into enabling object permanence for robots and lay foundations for robust robot autonomy in task execution.
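The abstract does not specify the network architecture, so as a rough illustration of the multimodal fusion idea, here is a minimal PyTorch sketch. All layer choices, dimensions, and names (e.g., `AudioVisualDropNet`, the GRU trajectory branch, the spectrogram CNN branch) are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical sketch of a multimodal fusion network; every layer choice
# below is an assumption, since the abstract gives no architecture details.
import torch
import torch.nn as nn

class AudioVisualDropNet(nn.Module):
    """Fuses a partial bounce trajectory with drop-impact audio to
    regress the object's 2D end location (assumed output format)."""

    def __init__(self, traj_dim=3, audio_bins=64, hidden=128):
        super().__init__()
        # Trajectory branch: GRU over the observed (x, y, z) positions.
        self.traj_enc = nn.GRU(traj_dim, hidden, batch_first=True)
        # Audio branch: small CNN over a log-mel spectrogram (assumed input).
        self.audio_enc = nn.Sequential(
            nn.Conv1d(audio_bins, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time frames
        )
        # Fusion head: concatenate both embeddings, regress (x, y).
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, traj, audio):
        # traj:  (batch, T_obs, 3) partial bounce trajectory
        # audio: (batch, audio_bins, T_frames) impact-sound spectrogram
        _, h = self.traj_enc(traj)                   # h: (1, batch, hidden)
        t_feat = h.squeeze(0)                        # (batch, hidden)
        a_feat = self.audio_enc(audio).squeeze(-1)   # (batch, hidden)
        return self.head(torch.cat([t_feat, a_feat], dim=-1))

# Example: 10 observed trajectory points, a 64-bin / 40-frame spectrogram.
model = AudioVisualDropNet()
end_xy = model(torch.randn(4, 10, 3), torch.randn(4, 64, 40))
print(end_xy.shape)  # torch.Size([4, 2])
```

The key design point this sketch tries to capture is late fusion: each modality is encoded independently and the embeddings are concatenated before regression, so the audio cue can still inform the prediction when the visual trajectory is cut short.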
