Image Generation and Translation with Disentangled Representations

by Tobias Hinz, et al.

Generative models have made significant progress in modeling complex data distributions such as natural images. The introduction of Generative Adversarial Networks (GANs) and auto-encoders made it possible to train on large datasets in an unsupervised manner. However, many generative models offer no way to specify what kind of image should be generated, and cannot translate existing images into new images of similar domains. Furthermore, models that can perform image-to-image translation often need a distinct model for each domain pair, making it hard to scale these systems to multi-domain image-to-image translation. We introduce a model that can do both: controllable image generation and image-to-image translation between multiple domains. We split our image representation into two parts, encoding unstructured and structured information respectively. The latter is designed in a disentangled manner, so that different parts encode different image characteristics. We train an encoder to map images into these representations and use a small amount of labeled data to specify what kind of information should be encoded in the disentangled part. A generator is trained to produce images from these representations using the characteristics provided by the disentangled part of the representation. Through this we can control what kind of images the generator produces, translate images between different domains, and even learn unknown data-generating factors, all with a single model.
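The translation mechanism the abstract describes amounts to encoding an image into an (unstructured, structured) pair, editing one slot of the disentangled structured part, and decoding again. The following is a minimal NumPy sketch of that data flow only: the linear encoder and generator are untrained stand-ins, and all dimensions and the "domain slot" index are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Representation sizes (illustrative choices, not from the paper).
DIM_UNSTRUCTURED = 16   # free-form appearance code
DIM_STRUCTURED = 4      # disentangled code: one slot per image characteristic
IMG_DIM = 64            # flattened toy "image"

# Toy linear encoder: image -> (unstructured, structured) representation.
W_enc = rng.standard_normal((IMG_DIM, DIM_UNSTRUCTURED + DIM_STRUCTURED)) * 0.1

def encode(x):
    h = x @ W_enc
    return h[:DIM_UNSTRUCTURED], h[DIM_UNSTRUCTURED:]

# Toy linear generator: representation -> image.
W_gen = rng.standard_normal((DIM_UNSTRUCTURED + DIM_STRUCTURED, IMG_DIM)) * 0.1

def generate(z_unstructured, c_structured):
    return np.concatenate([z_unstructured, c_structured]) @ W_gen

# Image-to-image translation: encode an image, keep its unstructured code,
# overwrite one disentangled slot (here a hypothetical "domain" slot), decode.
x = rng.standard_normal(IMG_DIM)
z, c = encode(x)
c_target = c.copy()
c_target[0] = 1.0                      # set the target-domain attribute
x_translated = generate(z, c_target)   # same content, new domain code
```

In the actual model both maps are deep networks trained adversarially, and the small labeled set supervises only the structured slots; this sketch just makes the shape of the representation split concrete.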


