BANMo: Building Animatable 3D Neural Models from Many Casual Videos

by Gengshan Yang, et al.

Prior work on articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods do not scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither a specialized sensor nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. While the use of many videos provides greater coverage of camera views and object articulations, these videos introduce significant challenges in establishing correspondence across scenes with different backgrounds, illumination conditions, etc. Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels and an articulated model. We introduce neural blend skinning models that allow for differentiable and invertible articulated deformations. When combined with canonical embeddings, such models allow us to establish dense correspondences across videos that can be self-supervised with cycle consistency. On real and synthetic datasets, BANMo produces higher-fidelity 3D reconstructions of humans and animals than prior work, with the ability to render realistic images from novel viewpoints and poses. Project webpage: .
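The abstract's first ingredient, classic blend skinning, deforms a canonical shape by mixing per-bone rigid transforms with per-point weights. As a minimal sketch of that classic linear formulation (not BANMo's neural variant, and with illustrative shapes and names assumed here):

```python
import numpy as np

def blend_skinning(points, weights, transforms):
    """Linear blend skinning: deform canonical points by a
    weighted mix of per-bone rigid transforms.

    points:     (N, 3) canonical 3D points
    weights:    (N, B) skinning weights, each row summing to 1
    transforms: (B, 4, 4) rigid transform per bone
    """
    # Lift points to homogeneous coordinates: (N, 4)
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Blend a per-point transform from the bone transforms: (N, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, transforms)
    # Apply each point's blended transform and drop the homogeneous row
    deformed = np.einsum("nij,nj->ni", blended, homo)
    return deformed[:, :3]
```

BANMo's contribution is to make such deformations both differentiable and invertible so they can be optimized inside a rendering loss; the linear mixing above is the classical starting point.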



Related papers:

- DOVE: Learning Deformable 3D Objects by Watching Videos
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild
- Topologically-Aware Deformation Fields for Single-View 3D Reconstruction
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction
- Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition
- Deformable Neural Radiance Fields
- Fast-SNARF: A Fast Deformer for Articulated Neural Fields
