Learning Speech-driven 3D Conversational Gestures from Video

02/13/2021
by   Ikhsanul Habibie, et al.

We propose the first approach to automatically and jointly synthesize the synchronous 3D conversational body and hand gestures, as well as the 3D face and head animation, of a virtual character from speech input. Our algorithm uses a CNN architecture that leverages the inherent correlation between facial expression and hand gestures. Synthesizing conversational body gestures is a multi-modal problem, since many different gestures can plausibly accompany the same input speech. To synthesize plausible body gestures in this setting, we train a Generative Adversarial Network (GAN)-based model that measures the plausibility of generated sequences of 3D body motion when paired with the input audio features. We also contribute a new way to create a large corpus of more than 33 hours of annotated body, hand, and face data from in-the-wild videos of talking people. To this end, we apply state-of-the-art monocular approaches for 3D body and hand pose estimation, as well as dense 3D face performance capture, to the video corpus. In this way, we can train on orders of magnitude more data than previous algorithms that resort to complex in-studio motion capture setups, and thereby train more expressive synthesis algorithms. Our experiments and a user study confirm the state-of-the-art quality of our speech-synthesized full 3D character animations.
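The core adversarial idea described above — a discriminator that scores how plausible a generated 3D motion sequence is *when paired with* the accompanying audio features — can be illustrated with a minimal numpy sketch. Everything concrete here (the two temporal convolution layers, the conditioning-by-channel-concatenation, the layer widths, and all parameter names) is an illustrative assumption for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution over the time axis.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    K, _, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        # Contract the kernel window over time and input channels.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return out

def discriminator_score(audio_feats, poses, params):
    """Plausibility score in (0, 1) for a paired (audio, motion) sequence.
    Conditioning: audio and pose channels are concatenated per frame.
    audio_feats: (T, A), poses: (T, P)."""
    x = np.concatenate([audio_feats, poses], axis=1)            # (T, A + P)
    h = np.maximum(conv1d(x, params["w1"], params["b1"]), 0.0)  # ReLU
    h = np.maximum(conv1d(h, params["w2"], params["b2"]), 0.0)
    logit = h.mean(axis=0) @ params["w3"] + params["b3"]        # temporal pooling
    return 1.0 / (1.0 + np.exp(-logit))                         # sigmoid

# Toy dimensions: frames, audio feature dims, pose dims, hidden width.
T, A, P, H = 64, 26, 42, 16
params = {
    "w1": 0.1 * rng.standard_normal((5, A + P, H)),
    "b1": np.zeros(H),
    "w2": 0.1 * rng.standard_normal((5, H, H)),
    "b2": np.zeros(H),
    "w3": 0.1 * rng.standard_normal(H),
    "b3": 0.0,
}

audio = rng.standard_normal((T, A))
motion = rng.standard_normal((T, P))
score = discriminator_score(audio, motion, params)
print(f"plausibility score: {float(score):.4f}")
```

During adversarial training, this score would be pushed toward 1 for real (audio, motion) pairs and toward 0 for pairs containing generated motion, which is what forces the generator's output to stay synchronized with the speech rather than merely look realistic in isolation.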


