Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

11/06/2017
by A. Zlatintsi et al.

We explore new aspects of assistive living through smart human-robot interaction (HRI) that involve automatic recognition and online validation of speech and gestures in a natural interface, providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We address privacy issues by evaluating the depth visual stream along with the RGB, using Kinect sensors. The audio-gestural recognition task on this new dataset yields up to 84.5% accuracy, while the online validation of the I-Support system on elderly users achieves up to 84% when the two modalities are fused together. The results are promising enough to support further research in the area of multimodal recognition for assistive social HRI, considering the difficulties of the specific task. Upon acceptance of the paper, part of the data will be publicly available.
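As a concrete illustration of how two modalities can be "fused together" at decision level, the sketch below shows a generic weighted late fusion of per-class scores from an audio classifier and a gesture classifier. This is a minimal, hypothetical example: the function name, weight, and score values are assumptions for illustration and are not taken from the I-Support pipeline described in the paper.

```python
import numpy as np

def late_fusion(audio_scores, gesture_scores, w_audio=0.5):
    """Weighted late fusion of per-class posterior scores.

    audio_scores, gesture_scores: per-class scores from the two
    unimodal classifiers (same class ordering in both).
    w_audio: weight given to the audio modality; the gesture
    modality receives (1 - w_audio).
    Returns the fused class index and the fused score vector.
    """
    audio = np.asarray(audio_scores, dtype=float)
    gesture = np.asarray(gesture_scores, dtype=float)
    fused = w_audio * audio + (1.0 - w_audio) * gesture
    return int(np.argmax(fused)), fused

# Hypothetical scores for 3 candidate commands: the modalities
# disagree individually, but fusion picks the class that both
# modalities jointly support.
cmd, scores = late_fusion([0.5, 0.3, 0.2], [0.1, 0.6, 0.3], w_audio=0.5)
```

In practice the fusion weight would be tuned on held-out data, and the unimodal scores would come from the speech and gesture recognizers rather than being hand-set as here.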
