Visual Object Networks: Image Generation with Disentangled 3D Representation

12/06/2018
by   Jun-Yan Zhu, et al.

Recent progress in deep generative models has led to tremendous breakthroughs in image generation. However, while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world. We present a new generative model, Visual Object Networks (VON), which synthesizes natural images of objects with a disentangled 3D representation. Inspired by classic graphics rendering pipelines, we unravel the image formation process into three conditionally independent factors---shape, viewpoint, and texture---and present an end-to-end adversarial learning framework that jointly models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes that are indistinguishable from real shapes. It then renders the object's 2.5D sketches (i.e., its silhouette and depth map) from that shape under a sampled viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches to generate natural images. VON not only generates images that are more realistic than those of state-of-the-art 2D image synthesis methods, but also enables many 3D operations, such as changing the viewpoint of a generated image, editing shape and texture, interpolating linearly in shape and texture space, and transferring appearance across different objects and viewpoints.
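The three-stage pipeline described above (shape, then viewpoint-dependent 2.5D sketches, then texture) can be illustrated with a minimal NumPy sketch. Every function here is a hypothetical stand-in, not the paper's implementation: the learned 3D shape GAN is replaced by a perturbed voxel sphere, the differentiable renderer by an orthographic projection, and the texture network by a simple coloring of the silhouette and depth map.

```python
import numpy as np

def sample_shape(res=32, radius=10, seed=0):
    # Stand-in for VON's learned 3D shape generator: a voxelized sphere
    # whose radius is perturbed by a sampled shape latent (hypothetical).
    rng = np.random.default_rng(seed)
    z = rng.normal()  # shape latent sample
    g = np.arange(res) - res / 2
    x, y, zz = np.meshgrid(g, g, g, indexing="ij")
    return (x**2 + y**2 + zz**2) <= (radius + z) ** 2  # boolean voxel grid

def render_25d(voxels, axis=2):
    # Stand-in for the differentiable renderer: project the voxels along a
    # viewing axis to obtain the 2.5D sketches (silhouette + depth map).
    silhouette = voxels.any(axis=axis)
    depth = np.argmax(voxels, axis=axis).astype(float)  # first occupied voxel
    depth[~silhouette] = 0.0  # background has no depth
    return silhouette, depth

def add_texture(silhouette, depth):
    # Stand-in for the texture network: map 2.5D sketches to an RGB image.
    h, w = depth.shape
    img = np.zeros((h, w, 3))
    img[..., 0] = silhouette                      # object mask
    img[..., 1] = depth / (depth.max() + 1e-8)    # depth-based shading
    return img

voxels = sample_shape()
sil, depth = render_25d(voxels)     # "viewpoint" = chosen projection axis
image = add_texture(sil, depth)
print(image.shape)  # (32, 32, 3)
```

Because the factors are kept separate, swapping only `add_texture` re-textures the same shape, and re-running `render_25d` with a different axis changes the viewpoint while shape and texture stay fixed, which mirrors the 3D editing operations the abstract lists.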


Related research

10/01/2019  Unsupervised Generative 3D Shape Learning from Natural Images
In this paper we present, to the best of our knowledge, the first method...

08/28/2018  3D-Aware Scene Manipulation via Inverse Graphics
We aim to obtain an interpretable, expressive and disentangled scene rep...

11/16/2020  Cycle-Consistent Generative Rendering for 2D-3D Modality Translation
For humans, visual understanding is inherently generative: given a 3D sh...

06/30/2020  Deep Geometric Texture Synthesis
Recently, deep generative adversarial networks for image generation have...

01/18/2022  GANmouflage: 3D Object Nondetection with Texture Fields
We propose a method that learns to camouflage 3D objects within scenes. ...

05/17/2019  Texture Fields: Learning Texture Representations in Function Space
In recent years, substantial progress has been achieved in learning-base...

10/06/2022  XDGAN: Multi-Modal 3D Shape Generation in 2D Space
Generative models for 2D images have recently seen tremendous progress in...
