Modeling Grasp Motor Imagery through Deep Conditional Generative Models

01/11/2017
by Matthew Veres et al.

Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset.
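
As a rough illustration of the paradigm the abstract describes, the sketch below implements a conditional variational autoencoder (CVAE), a standard deep conditional generative model: a grasp configuration vector x is encoded jointly with object features y, and at test time diverse, multimodal grasps are generated by sampling latent codes conditioned on the object alone. All layer sizes, dimensions, and variable names here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal CVAE: generates a grasp vector x conditioned on object features y."""
    def __init__(self, x_dim=9, y_dim=32, z_dim=8, h_dim=64):
        super().__init__()
        # Recognition network q(z | x, y)
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Generation network p(x | z, y)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, y], dim=-1)), mu, logvar

def loss_fn(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence from the unit-Gaussian prior
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# At test time, sample several grasps for one object by varying only z.
model = ConditionalVAE()
y = torch.randn(5, 32)                # placeholder object features
z = torch.randn(5, 8)                 # latent samples from the prior
grasps = model.dec(torch.cat([z, y], dim=-1))
```

Because the decoder maps each latent sample to a different grasp for the same object, this kind of model can capture several distinct, equally valid hand configurations rather than averaging them into one.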
