Spatial Temporal Transformer Network for Skeleton-based Action Recognition

08/17/2020
by Chiara Plizzari, et al.

Skeleton-based Human Activity Recognition has attracted great interest in recent years, since skeleton data have been shown to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. In particular, Spatial-Temporal Graph Convolutional Networks (ST-GCN) have proven effective at learning both spatial and temporal dependencies on non-Euclidean data such as skeleton graphs. Nevertheless, effectively encoding the latent information underlying the 3D skeleton remains an open problem, in particular how to extract useful information from joint motion patterns and their correlations. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR), which models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) captures intra-frame interactions between different body parts, while a Temporal Self-Attention module (TSA) models inter-frame correlations. The two are combined in a two-stream network whose performance is evaluated on three large-scale datasets, NTU-RGB+D 60, NTU-RGB+D 120, and Kinetics Skeleton 400. The network outperforms the state of the art on NTU-RGB+D among models that use the same joint-based input data.
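To make the two attention paths concrete, below is a minimal PyTorch sketch of how spatial and temporal self-attention can be applied to a skeleton tensor of shape (batch, channels, frames, joints). This is an illustrative assumption, not the authors' released implementation: the module names, the head count, and the use of nn.MultiheadAttention are choices made only for this example.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Hypothetical sketch: self-attention across the V joints of each
    frame, so every joint attends to every other joint (intra-frame)."""
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (N, C, T, V)
        n, c, t, v = x.shape
        # Fold the time axis into the batch so attention runs per frame.
        seq = x.permute(0, 2, 3, 1).reshape(n * t, v, c)     # (N*T, V, C)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(n, t, v, c).permute(0, 3, 1, 2)   # back to (N, C, T, V)

class TemporalSelfAttention(nn.Module):
    """Hypothetical sketch: self-attention along the T frames of each
    joint, modelling inter-frame correlations of its trajectory."""
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (N, C, T, V)
        n, c, t, v = x.shape
        # Fold the joint axis into the batch so attention runs per joint.
        seq = x.permute(0, 3, 2, 1).reshape(n * v, t, c)     # (N*V, T, C)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(n, v, t, c).permute(0, 3, 2, 1)   # back to (N, C, T, V)

# Usage on a dummy clip: 2 samples, 64 channels, 30 frames, 25 joints
# (the NTU-RGB+D skeleton layout).
x = torch.randn(2, 64, 30, 25)
ssa_out = SpatialSelfAttention(64)(x)   # intra-frame joint interactions
tsa_out = TemporalSelfAttention(64)(x)  # inter-frame joint dynamics
print(ssa_out.shape, tsa_out.shape)     # both torch.Size([2, 64, 30, 25])
```

In a two-stream arrangement such as the one the abstract describes, each stream would stack one of these attention types with convolutions along the other dimension, and the two streams' scores would be fused for the final classification.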
