Single-view 3D Body and Cloth Reconstruction under Complex Poses

by Nicolas Ugrinovic et al.

Recent advances in 3D human shape reconstruction from single images have shown impressive results, leveraging deep networks that model a so-called implicit function to learn the occupancy status of arbitrarily dense 3D points in space. However, while current algorithms based on this paradigm, such as PIFuHD, estimate accurate geometry for the human body and clothes, they require high-resolution input images and cannot capture complex body poses: most training and evaluation is performed on 1k-resolution images of humans standing in front of the camera in neutral body poses. In this paper, we leverage publicly available data to extend existing implicit-function-based models to images of humans with arbitrary poses and self-occluded limbs. We argue that the representation power of the implicit function is not sufficient to simultaneously model fine geometric detail and body pose. We therefore propose a coarse-to-fine approach in which we first learn an implicit function that maps the input image to a 3D body shape with a low level of detail but that correctly fits the underlying human pose, despite its complexity. We then learn a displacement map, conditioned on the smoothed surface and on the input image, which encodes the high-frequency details of the clothes and body. In the experimental section, we show that this coarse-to-fine strategy strikes a good trade-off between shape detail and pose correctness, comparing favorably to the most recent state-of-the-art approaches. Our code will be made publicly available.
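The two-stage idea in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): a toy "implicit function" that returns occupancy probabilities for 3D query points stands in for the learned coarse network, and a refinement step offsets coarse-surface vertices along their normals by a scalar displacement, mimicking how a learned displacement map would add high-frequency detail.

```python
import numpy as np

def coarse_occupancy(points, center=np.zeros(3), radius=1.0):
    """Stand-in for a learned implicit function: maps arbitrary 3D query
    points to occupancy probability (here, a soft sphere via a sigmoid of
    the signed distance). A real model would condition on image features."""
    d = np.linalg.norm(points - center, axis=-1)
    return 1.0 / (1.0 + np.exp(8.0 * (d - radius)))

def refine_with_displacement(vertices, normals, displacement):
    """Second stage: add high-frequency detail by moving each vertex of
    the coarse surface along its normal by a per-vertex scalar offset
    (in the paper, predicted from the smoothed surface and the image)."""
    return vertices + displacement[:, None] * normals

# Stage 1: query a dense grid of 3D points against the coarse implicit function.
axis = np.linspace(-1.5, 1.5, 8)
pts = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
occ = coarse_occupancy(pts)
inside = occ > 0.5  # thresholding occupancy yields the coarse volume

# Stage 2: refine a toy surface point by displacing it along its normal.
v = np.array([[1.0, 0.0, 0.0]])
n = v / np.linalg.norm(v)
v_fine = refine_with_displacement(v, n, np.array([0.05]))
```

In the actual pipeline the coarse surface would be extracted from the occupancy grid (e.g. with marching cubes) before the displacement stage; the sketch skips extraction to keep the two stages explicit.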


