iCub! Do you recognize what I am doing?: multimodal human action recognition on multisensory-enabled iCub robot

12/17/2022
by Kas Kniesmeijer, et al.

This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. Here we employed the iCub robot to observe predefined actions performed by human partners using four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages the complementary characteristics of three color cameras and one depth sensor, improving recognition accuracy in most cases compared to models trained on a single modality. The results indicate that the proposed models can be deployed on the iCub robot for tasks that require multimodal action recognition, including social tasks such as partner-specific adaptation and contextual behavior understanding, to name a few.
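To illustrate the idea of multimodal ensemble learning over separate sensor streams, here is a minimal late-fusion sketch in Python. It assumes each modality (three RGB cameras and one depth sensor) has its own trained classifier emitting per-class scores, which the ensemble combines by score averaging; the number of action classes and the fusion rule are illustrative assumptions, not details taken from the abstract, and the paper's actual fusion strategy may differ.

```python
# Minimal sketch of late-fusion multimodal ensemble classification.
# Assumptions (not stated in the abstract): one trained classifier per
# modality (3 RGB cameras + 1 depth sensor), each producing a softmax
# probability vector; fusion by simple score averaging.
import numpy as np

NUM_CLASSES = 8  # hypothetical number of predefined actions


def ensemble_predict(modality_scores: list[np.ndarray]) -> int:
    """Average per-modality class scores and return the predicted action.

    modality_scores: one (NUM_CLASSES,) probability vector per modality,
    e.g. the softmax outputs of four independently trained models.
    """
    fused = np.mean(modality_scores, axis=0)  # late fusion by averaging
    return int(np.argmax(fused))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for the softmax outputs of the four per-modality models.
    scores = [rng.dirichlet(np.ones(NUM_CLASSES)) for _ in range(4)]
    print("Predicted action class:", ensemble_predict(scores))
```

Averaging scores (rather than, say, majority voting) lets a confident modality outweigh uncertain ones, which is one common way an ensemble exploits complementary sensors when a single modality is ambiguous.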
