Topologically Consistent Multi-View Face Inference Using Volumetric Sampling

10/06/2021
by Tianye Li, et al.

High-fidelity face digitization solutions often combine multi-view stereo (MVS) techniques for 3D reconstruction and a non-rigid registration step to establish dense correspondence across identities and expressions. A common problem is the need for manual clean-up after the MVS step, as 3D scans are typically affected by noise and outliers and contain hairy surface regions that artists must remove. Furthermore, mesh registration tends to fail for extreme facial expressions. Most learning-based methods use an underlying 3D morphable model (3DMM) to ensure robustness, but this limits the output accuracy for extreme facial expressions. In addition, the global bottleneck of regression architectures cannot produce meshes that tightly fit the ground-truth surfaces. We propose ToFu, Topologically consistent Face from multi-view, a geometry inference framework that produces topologically consistent meshes across facial identities and expressions using a volumetric representation instead of an explicit underlying 3DMM. Our novel progressive mesh generation network embeds the topological structure of the face in a feature volume, sampled from geometry-aware local features. A coarse-to-fine architecture facilitates dense and accurate facial mesh predictions in a consistent mesh topology. ToFu further captures displacement maps for pore-level geometric details and facilitates high-quality rendering in the form of albedo and specular reflectance maps. These high-quality assets are readily usable by production studios for avatar creation, animation, and physically based skin rendering. We demonstrate state-of-the-art geometric and correspondence accuracy, while taking only 0.385 seconds to compute a mesh with 10K vertices, which is three orders of magnitude faster than traditional techniques. The code and the model are available for research purposes at https://tianyeli.github.io/tofu.
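For intuition, the sketch below illustrates the volumetric sampling idea the abstract describes: per-view image features are sampled at the points of a 3D grid around the face, fused across views into a feature volume, and per-vertex probability volumes yield vertex positions of a fixed-topology template via a soft-argmax. This is a minimal illustration under assumed shapes, camera conventions, and a simple mean fusion; all function names are hypothetical and it is not ToFu's actual implementation.

```python
# Minimal PyTorch sketch of multi-view volumetric feature sampling (illustrative only).
import torch
import torch.nn.functional as F


def build_grid(center, size, resolution):
    """Axis-aligned 3D grid of shape (D, D, D, 3) centered on the face region."""
    lin = torch.linspace(-0.5, 0.5, resolution)
    zz, yy, xx = torch.meshgrid(lin, lin, lin, indexing="ij")
    return torch.stack([xx, yy, zz], dim=-1) * size + center


def sample_feature_volume(feat_maps, projections, grid):
    """Project grid points into each view and bilinearly sample per-view features.

    feat_maps:   (V, C, H, W) CNN feature maps, one per calibrated view
    projections: (V, 3, 4) world-to-pixel projection matrices (assumed convention)
    grid:        (D, D, D, 3) world-space grid points
    returns:     (C, D, D, D) view-averaged feature volume
    """
    V, C, H, W = feat_maps.shape
    D = grid.shape[0]
    pts_h = torch.cat([grid.reshape(-1, 3), torch.ones(D**3, 1)], dim=-1)  # (N, 4)

    volumes = []
    for v in range(V):
        uvw = pts_h @ projections[v].T                 # (N, 3) homogeneous pixels
        uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)   # (N, 2) pixel coordinates
        # Normalize to [-1, 1] for grid_sample (assumes pixels in [0, W) x [0, H)).
        uv_norm = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
        sampled = F.grid_sample(
            feat_maps[v : v + 1],
            uv_norm.view(1, 1, -1, 2),
            align_corners=True,
        )                                              # (1, C, 1, N)
        volumes.append(sampled.view(C, D, D, D))
    return torch.stack(volumes).mean(dim=0)            # simple mean fusion across views


def soft_argmax_vertices(prob_volumes, grid):
    """Expected 3D location per template vertex from per-vertex probability volumes.

    prob_volumes: (K, D, D, D) unnormalized scores, one volume per template vertex
    grid:         (D, D, D, 3) grid point positions
    returns:      (K, 3) predicted vertex positions in a consistent topology
    """
    K = prob_volumes.shape[0]
    probs = F.softmax(prob_volumes.reshape(K, -1), dim=-1)  # (K, D^3)
    return probs @ grid.reshape(-1, 3)                      # (K, 3)


if __name__ == "__main__":
    # Toy shapes only: 4 views, 8-channel features, a 16^3 grid, 10 template vertices.
    feats = torch.randn(4, 8, 64, 64)
    P = torch.randn(4, 3, 4)
    grid = build_grid(center=torch.zeros(3), size=0.3, resolution=16)
    volume = sample_feature_volume(feats, P, grid)          # (8, 16, 16, 16)
    scores = torch.randn(10, 16, 16, 16)                    # stand-in for a 3D CNN head
    verts = soft_argmax_vertices(scores, grid)
    print(volume.shape, verts.shape)
```

In a coarse-to-fine design such as the one the abstract mentions, this kind of sampling would be applied first on a global grid for a coarse mesh and then on small local grids around each coarse vertex for refinement; the sketch above corresponds only to a single such stage.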


Related research

06/12/2023 · Instant Multi-View Head Capture through Learnable Registration
Existing methods for capturing datasets of 3D heads in dense semantic co...

01/07/2021 · PVA: Pixel-aligned Volumetric Avatars
Acquisition and rendering of photo-realistic human heads is a highly cha...

12/23/2022 · Neural Volumetric Blendshapes: Computationally Efficient Physics-Based Facial Blendshapes
Computationally weak systems and demanding graphical applications are st...

10/01/2020 · Dynamic Facial Asset and Rig Generation from a Single Scan
The creation of high-fidelity computer-generated (CG) characters used in...

07/22/2022 · Multiface: A Dataset for Neural Face Rendering
Photorealistic avatars of human faces have come a long way in recent yea...

11/21/2022 · Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars
3D-aware generative adversarial networks (GANs) synthesize high-fidelity...

04/02/2020 · Learning Formation of Physically-Based Face Attributes
Based on a combined data set of 4000 high resolution facial scans, we in...
