Fast-SNARF: A Fast Deformer for Articulated Neural Fields

11/28/2022
by Xu Chen, et al.

Neural fields have revolutionized the area of 3D reconstruction and novel view synthesis of rigid scenes. A key challenge in making such methods applicable to articulated objects, such as the human body, is modeling the deformation of 3D locations between the rest pose (a canonical space) and the deformed space. We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space via iterative root finding. Fast-SNARF is a functional drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency. We contribute several algorithmic and implementation improvements over SNARF, yielding a speed-up of 150×. These improvements include voxel-based correspondence search, pre-computing the linear blend skinning function, and an efficient software implementation with CUDA kernels. Fast-SNARF enables efficient and simultaneous optimization of shape and skinning weights from deformed observations without correspondences (e.g. 3D meshes). Because learning deformation maps is a crucial component of many 3D human avatar methods, and because Fast-SNARF provides a computationally efficient solution, we believe this work represents a significant step towards the practical creation of 3D virtual humans.
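To make the two ingredients named in the abstract concrete, the snippet below is a minimal NumPy sketch, not the authors' CUDA implementation, of (a) linear blend skinning (LBS), which maps a canonical point to a posed point given per-bone weights and bone transforms, and (b) a correspondence search that inverts this map by iterative root finding (Broyden's method is used here as one plausible quasi-Newton choice). The function names `lbs_forward` and `find_canonical_correspondence`, and the toy weight function, are illustrative assumptions; in Fast-SNARF the skinning weights would instead be read from a pre-computed voxel grid.

```python
import numpy as np

def lbs_forward(x_c, weights_fn, bone_transforms):
    """Deform a canonical point x_c (3,) by linear blend skinning.

    weights_fn: callable returning per-bone skinning weights (n_bones,) that
                sum to 1 for a canonical point.
    bone_transforms: (n_bones, 4, 4) rigid bone transformation matrices.
    """
    w = weights_fn(x_c)                              # (n_bones,)
    x_h = np.append(x_c, 1.0)                        # homogeneous coordinates
    T = np.tensordot(w, bone_transforms, axes=1)     # blended 4x4 transform
    return (T @ x_h)[:3]

def find_canonical_correspondence(x_d, weights_fn, bone_transforms,
                                  x_init, n_iters=20, tol=1e-6):
    """Find x_c such that lbs_forward(x_c) == x_d via Broyden's method,
    starting from an initial guess x_init (e.g. a rigidly un-posed point).
    Didactic sketch of iterative root finding, not an optimized solver."""
    x_c = np.array(x_init, dtype=float)
    J_inv = np.eye(3)                                # approximate inverse Jacobian
    f = lbs_forward(x_c, weights_fn, bone_transforms) - x_d
    for _ in range(n_iters):
        if np.linalg.norm(f) < tol:
            break
        dx = -J_inv @ f                              # quasi-Newton step
        x_c = x_c + dx
        f_new = lbs_forward(x_c, weights_fn, bone_transforms) - x_d
        df = f_new - f
        denom = dx @ J_inv @ df
        if abs(denom) > 1e-12:
            # Broyden's rank-1 update of the inverse Jacobian
            J_inv = J_inv + np.outer(dx - J_inv @ df, dx @ J_inv) / denom
        f = f_new
    return x_c

# Toy usage: two bones, weights varying along x (purely illustrative).
bones = np.stack([np.eye(4), np.eye(4)])
bones[1, :3, 3] = [0.0, 0.1, 0.0]                    # second bone translated in y
w_fn = lambda x: np.clip(np.array([1.0 - x[0], x[0]]), 0.0, 1.0)
x_d = lbs_forward(np.array([0.3, 0.0, 0.0]), w_fn, bones)
x_c = find_canonical_correspondence(x_d, w_fn, bones, x_init=x_d)
```

The abstract's speed-ups come from replacing generic per-query evaluations like the above with a voxel-based search, pre-computed LBS weights, and fused CUDA kernels; the sketch only illustrates the underlying forward-skinning-plus-root-finding idea.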

Related research

Animatable Neural Radiance Fields for Human Body Modeling (05/06/2021)
This paper addresses the challenge of reconstructing an animatable human...

Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis (06/15/2022)
Recently, Neural Radiance Fields (NeRF) is revolutionizing the task of n...

SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes (04/08/2021)
Neural implicit surface representations have emerged as a promising para...

DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes (05/31/2022)
Modeling dynamic scenes is important for many applications such as virtu...

SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes (08/16/2023)
Existing methods for the 4D reconstruction of general, non-rigidly defor...

MonoHuman: Animatable Human Neural Field from Monocular Video (04/04/2023)
Animating virtual avatars with free-view control is crucial for various ...

BANMo: Building Animatable 3D Neural Models from Many Casual Videos (12/23/2021)
Prior work for articulated 3D shape reconstruction often relies on speci...
