Human Action Recognition Using Deep Multilevel Multimodal (M2) Fusion of Depth and Inertial Sensors

10/25/2019
by Zeeshan Ahmad, et al.

Multimodal fusion frameworks for Human Action Recognition (HAR) using depth and inertial sensor data have been proposed over the years. In most existing works, fusion is performed at a single level (feature level or decision level), missing the opportunity to fuse rich mid-level features that are necessary for better classification. To address this shortcoming, we propose three novel deep multilevel multimodal fusion frameworks that capitalize on different fusion strategies at various stages and exploit the advantages of multilevel fusion. At the input, we transform the depth data into depth images called sequential front view images (SFIs) and the inertial sensor data into signal images. Each input modality, depth and inertial, is made multimodal in its own right by convolving it with the Prewitt filter. Creating this "modality within modality" enables the extraction of additional complementary and discriminative features through Convolutional Neural Networks (CNNs). CNNs are trained on the input images of each modality to learn low-level, high-level and complex features. The learned features are extracted and fused at different stages of the proposed frameworks to combine discriminative and complementary information. These highly informative features serve as input to a multi-class Support Vector Machine (SVM). We evaluate the proposed frameworks on three publicly available multimodal HAR datasets: UTD Multimodal Human Action Dataset (MHAD), Berkeley MHAD, and UTD-MHAD Kinect V2. Experimental results demonstrate the superiority of the proposed fusion frameworks over existing methods.
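As a rough, hypothetical sketch of the "modality within modality" idea described in the abstract: a multichannel inertial recording is stacked into a 2D signal image, and a Prewitt kernel is convolved with it to produce a second, edge-emphasizing image of the same modality. The channel count, sample length, normalization, and filter orientation below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical inertial recording: 6 channels (3-axis accelerometer +
# 3-axis gyroscope), 120 time samples. Real data would come from a sensor.
rng = np.random.default_rng(0)
signals = rng.standard_normal((6, 120))

# Form a "signal image" by stacking channels row-wise and min-max
# normalizing the whole array into [0, 1].
signal_image = (signals - signals.min()) / (signals.max() - signals.min())

# Prewitt kernel (horizontal-gradient variant). Convolving the signal
# image with it yields a second input image for the same modality,
# emphasizing transitions -- the "modality within modality".
prewitt_x = np.array([[1.0, 0.0, -1.0],
                      [1.0, 0.0, -1.0],
                      [1.0, 0.0, -1.0]])
prewitt_image = convolve(signal_image, prewitt_x, mode="nearest")

# Both images have the same shape and would be fed to separate CNN streams.
print(signal_image.shape, prewitt_image.shape)
```

Both arrays keep the original (channels, samples) shape, so the original and Prewitt-filtered images can be fed to parallel CNN streams whose features are later fused.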

Related research

05/28/2021 · Inertial Sensor Data To Image Encoding For Human Action Recognition
Convolutional Neural Networks (CNNs) are successful deep learning models...

08/22/2020 · Multidomain Multimodal Fusion For Human Action Recognition Using Inertial Sensors
One of the major reasons for misclassification of multiplex actions duri...

10/29/2020 · CNN based Multistage Gated Average Fusion (MGAF) for Human Action Recognition Using Depth and Inertial Sensors
Convolutional Neural Network (CNN) provides leverage to extract and fuse...

07/21/2021 · ECG Heartbeat Classification Using Multimodal Fusion
Electrocardiogram (ECG) is an authoritative source to diagnose and count...

07/14/2022 · Inertial Hallucinations – When Wearable Inertial Devices Start Seeing Things
We propose a novel approach to multimodal sensor fusion for Ambient Assi...

08/22/2020 · Towards Improved Human Action Recognition Using Convolutional Neural Networks and Multimodal Fusion of Depth and Inertial Sensor Data
This paper attempts at improving the accuracy of Human Action Recognitio...

08/07/2016 · Multiview Cauchy Estimator Feature Embedding for Depth and Inertial Sensor-Based Human Action Recognition
The ever-growing popularity of Kinect and inertial sensors has prompted ...