Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

by Amin Jourabloo et al.

Social presence, the feeling of being there with a real person, will fuel the next generation of communication systems driven by digital humans in virtual reality (VR). The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models. However, these PS models are time-consuming to build and are typically trained with limited data variability, which results in poor generalization and robustness. Major sources of variability that affect the accuracy of facial expression transfer algorithms include the use of different VR headsets (e.g., camera configuration, slop of the headset), facial appearance changes over time (e.g., beard, make-up), and environmental factors (e.g., lighting, backgrounds). This is a major drawback for the scalability of these models in VR. This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture (MIA) trained with specialized augmentation strategies. MIA drives the shape component of the avatar from three cameras in the VR headset (two eye cameras, one mouth camera) for untrained subjects, using minimal personalized information (i.e., the neutral 3D mesh shape). Similarly, if the PS texture decoder is available, MIA is able to drive the full avatar (shape + texture), robustly outperforming PS models in challenging scenarios. Our key contribution to improved robustness and generalization is that our method implicitly decouples, in an unsupervised manner, the facial expression from nuisance factors (e.g., headset, environment, facial appearance). We demonstrate the superior performance and robustness of the proposed method versus state-of-the-art PS approaches in a variety of experiments.
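The abstract gives only a high-level view of the pipeline: an identity-agnostic encoder maps the three headset camera crops (two eyes, one mouth) to an expression code, and a decoder combines that code with the subject's personalized neutral 3D mesh to produce the driven shape. The following is a minimal NumPy sketch of that data flow only, not the paper's implementation; all function names, dimensions, and the linear layers standing in for the networks are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_expression(eye_l, eye_r, mouth, W_enc):
    """Map the three headset camera crops to an identity-agnostic
    expression code (toy linear stand-in for the MIA encoder)."""
    x = np.concatenate([eye_l.ravel(), eye_r.ravel(), mouth.ravel()])
    return np.tanh(W_enc @ x)

def decode_shape(expr_code, neutral_mesh, W_dec):
    """Predict per-vertex offsets from the expression code and add them
    to the subject's neutral mesh (the only person-specific input)."""
    offsets = (W_dec @ expr_code).reshape(neutral_mesh.shape)
    return neutral_mesh + offsets

# Toy dimensions: three 16x16 grayscale crops, a 128-D expression code,
# and a 100-vertex mesh (real meshes have thousands of vertices).
eye_l, eye_r, mouth = (rng.standard_normal((16, 16)) for _ in range(3))
neutral_mesh = rng.standard_normal((100, 3))
W_enc = rng.standard_normal((128, 3 * 16 * 16)) * 0.01
W_dec = rng.standard_normal((100 * 3, 128)) * 0.01

code = encode_expression(eye_l, eye_r, mouth, W_enc)
mesh = decode_shape(code, neutral_mesh, W_dec)
assert mesh.shape == neutral_mesh.shape
```

Because the expression code carries no identity information, the same encoder can drive untrained subjects once their neutral mesh is supplied, which is the generalization property the abstract claims.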



