Learning Human Kinematics by Modeling Temporal Correlations between Joints for Video-based Human Pose Estimation

07/22/2022
by Yonghao Dang, et al.

Estimating human poses from videos is critical in human-computer interaction: by precisely estimating human poses, a robot can respond appropriately to the human. Most existing approaches use optical flow, RNNs, or CNNs to extract temporal features from videos. Despite the positive results of these attempts, most of them simply integrate features along the temporal dimension, ignoring temporal correlations between joints. In contrast to previous methods, we propose a plug-and-play kinematics modeling module (KMM) based on a domain-cross attention mechanism to explicitly model the temporal correlation between joints across different frames. Specifically, the proposed KMM models the temporal correlation between any two joints by calculating their temporal similarity. In this way, KMM can learn the motion cues of each joint. Using these motion cues (temporal domain) and the historical positions of joints (spatial domain), KMM can infer initial positions of the joints in the current frame. In addition, we present a kinematics modeling network (KIMNet), built on the KMM, that obtains the final joint positions by combining pose features with the initial joint positions. By explicitly modeling temporal correlations between joints, KIMNet can infer joints that are occluded in the current frame from all joints in the previous frame. Furthermore, because the KMM is implemented with an attention mechanism, it maintains the high resolution of the features and can therefore transfer rich historical pose information to the current frame, providing effective cues for locating occluded joints. Our approach achieves state-of-the-art results on two standard video-based pose estimation benchmarks, and the proposed KIMNet shows robustness to occlusion, demonstrating the effectiveness of the proposed method.
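The abstract describes the KMM as joint-to-joint attention computed from temporal similarity, which then propagates the previous frame's joint positions into initial estimates for the current frame. The minimal PyTorch-style sketch below illustrates that idea under our own assumptions: joint-wise heatmap features of shape (B, J, H, W), 1x1-convolution projections, and scaled dot-product attention over joints. The module and argument names (KinematicsModelingModule, motion_feat, prev_pose_feat) are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn


class KinematicsModelingModule(nn.Module):
    """Hypothetical sketch of an attention-based kinematics modeling module.

    Given frame-difference features (temporal domain) and the previous frame's
    joint heatmap features (spatial domain), it computes a joint-to-joint
    attention matrix from temporal similarity and uses it to propagate the
    previous joint positions into initial estimates for the current frame.
    Layer names and shapes are assumptions, not the paper's code.
    """

    def __init__(self, num_joints: int):
        super().__init__()
        # 1x1 convolutions keep the full spatial resolution while projecting
        # the joint-wise feature maps to query/key/value embeddings.
        self.query = nn.Conv2d(num_joints, num_joints, kernel_size=1)
        self.key = nn.Conv2d(num_joints, num_joints, kernel_size=1)
        self.value = nn.Conv2d(num_joints, num_joints, kernel_size=1)

    def forward(self, motion_feat: torch.Tensor, prev_pose_feat: torch.Tensor) -> torch.Tensor:
        # motion_feat, prev_pose_feat: (B, J, H, W) joint-wise feature maps
        b, j, h, w = motion_feat.shape
        q = self.query(motion_feat).flatten(2)      # (B, J, H*W)
        k = self.key(motion_feat).flatten(2)        # (B, J, H*W)
        v = self.value(prev_pose_feat).flatten(2)   # (B, J, H*W)

        # Temporal similarity between every pair of joints -> (B, J, J)
        attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)

        # Propagate previous joint positions according to the learned
        # joint-to-joint correlations to get initial positions for frame t.
        return (attn @ v).view(b, j, h, w)


if __name__ == "__main__":
    kmm = KinematicsModelingModule(num_joints=17)
    motion = torch.randn(2, 17, 96, 72)   # e.g. frame-difference features
    prev = torch.randn(2, 17, 96, 72)     # previous-frame joint heatmaps
    print(kmm(motion, prev).shape)        # torch.Size([2, 17, 96, 72])
```

In this reading, the full KIMNet would fuse such initial position maps with the current frame's pose features to regress the final heatmaps; that fusion step is omitted here because the abstract does not specify it.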


Related research

07/08/2021  Relation-Based Associative Joint Location for Human Pose Estimation in Videos
11/17/2018  Explicit Pose Deformation Learning for Tracking Human Poses
06/07/2021  Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking
07/20/2022  OTPose: Occlusion-Aware Transformer for Pose Estimation in Sparsely-Labeled Videos
06/17/2021  Optical Mouse: 3D Mouse Pose From Single-View Video
03/15/2023  Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video
08/18/2023  ResQ: Residual Quantization for Video Perception
