HumanLiff: Layer-wise 3D Human Generation with Diffusion Model

by Shoukang Hu, et al.

3D human generation from 2D images has achieved remarkable progress through the synergistic combination of neural rendering and generative models. Existing 3D human generative models mainly generate a clothed 3D human as an undetachable whole in a single pass, and rarely consider the layer-wise nature of a clothed human body, which typically consists of the body itself plus various garments such as underwear, outerwear, trousers, and shoes. In this work, we propose HumanLiff, the first layer-wise 3D human generative model with a unified diffusion process. Specifically, HumanLiff first generates a minimally clothed human, represented by tri-plane features in a canonical space, and then progressively generates clothes layer by layer. Layer-wise 3D human generation is thus formulated as a sequence of diffusion-based 3D conditional generation steps. To reconstruct more fine-grained 3D humans with the tri-plane representation, we propose a tri-plane shift operation that splits each tri-plane into three sub-planes and shifts them to enable feature-grid subdivision. To further enhance the controllability of 3D generation with 3D layered conditions, HumanLiff hierarchically fuses tri-plane features and 3D layered conditions to facilitate learning of the 3D diffusion model. Extensive experiments on two layer-wise 3D human datasets, SynBody (synthetic) and TightCap (real-world), validate that HumanLiff significantly outperforms state-of-the-art methods in layer-wise 3D human generation. Our code will be available at
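To make the tri-plane shift idea concrete, the following is a minimal NumPy sketch of how splitting a feature plane into three sub-planes with sub-cell offsets effectively subdivides the feature grid. The function name `triplane_shift_sample`, the specific offsets (0, 1/3, 2/3), and fusion by channel concatenation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def triplane_shift_sample(plane, xy, shifts=(0.0, 1.0 / 3, 2.0 / 3)):
    """Sample one plane of a tri-plane representation whose channels are
    split into three sub-planes, each read with a sub-cell shift.

    plane: (C, H, W) feature plane, C divisible by 3.
    xy: normalized query coordinates in [0, 1]^2.
    Returns a (C,)-dim feature: the three shifted sub-plane samples
    concatenated (the fusion scheme here is an assumption).
    """
    C, H, W = plane.shape
    assert C % 3 == 0, "channels must split evenly into 3 sub-planes"
    subs = plane.reshape(3, C // 3, H, W)
    feats = []
    for sub, s in zip(subs, shifts):
        # Offset the query by a fraction of a cell, then bilinear-sample;
        # the three offsets probe three interleaved positions of the grid.
        x = np.clip(xy[0] * (W - 1) + s, 0, W - 1 - 1e-6)
        y = np.clip(xy[1] * (H - 1) + s, 0, H - 1 - 1e-6)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        wx, wy = x - x0, y - y0
        f = (sub[:, y0, x0] * (1 - wx) * (1 - wy)
             + sub[:, y0, x1] * wx * (1 - wy)
             + sub[:, y1, x0] * (1 - wx) * wy
             + sub[:, y1, x1] * wx * wy)
        feats.append(f)
    return np.concatenate(feats)

# Example: a 6-channel 4x4 plane queried at the center.
plane = np.arange(6 * 4 * 4, dtype=np.float64).reshape(6, 4, 4)
feat = triplane_shift_sample(plane, (0.5, 0.5))  # shape (6,)
```

Because bilinear weights sum to one, a constant plane yields a constant feature, while the three offsets give each query access to three slightly different grid alignments.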




