CoMoGAN: continuous model-guided image-to-image translation

03/11/2021
by   Fabio Pizzati, et al.

CoMoGAN is a continuous GAN relying on the unsupervised reorganization of the target data on a functional manifold. To this end, we introduce a new Functional Instance Normalization layer and residual mechanism, which together disentangle image content from position on the target manifold. We rely on naive physics-inspired models to guide the training while allowing private model/translation features. CoMoGAN can be used with any GAN backbone and enables new types of image translation, such as cyclic image translation (e.g., timelapse generation) or detached linear translation. It outperforms the literature on all datasets and metrics. Our code is available at http://github.com/cv-rits/CoMoGAN .
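The key idea of a Functional Instance Normalization layer is that the affine parameters applied after instance normalization are not fixed, but are functions of a continuous position φ on the target manifold. The following is a minimal NumPy sketch of that idea only; the function names, the small Fourier basis for φ, and the fixed random weights are all illustrative assumptions, whereas in the actual method these φ-dependent parameters are learned during training.

```python
import numpy as np

def functional_instance_norm(x, phi, num_fourier=2, eps=1e-5):
    """Instance-normalize x (C, H, W), then apply an affine transform
    whose per-channel gamma/beta depend on the manifold position phi."""
    c, _, _ = x.shape
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + eps
    x_norm = (x - mu) / sigma

    # A toy Fourier basis in phi (assumption): [1, sin(k*phi), cos(k*phi)].
    basis = np.concatenate((
        [1.0],
        [np.sin(k * phi) for k in range(1, num_fourier + 1)],
        [np.cos(k * phi) for k in range(1, num_fourier + 1)],
    ))

    # Fixed random weights stand in for parameters learned in practice.
    rng = np.random.default_rng(0)
    w_gamma = rng.normal(size=(c, basis.size)) * 0.1
    w_beta = rng.normal(size=(c, basis.size)) * 0.1
    gamma = 1.0 + w_gamma @ basis  # per-channel scale as a function of phi
    beta = w_beta @ basis          # per-channel shift as a function of phi
    return gamma[:, None, None] * x_norm + beta[:, None, None]
```

Because the basis is periodic in φ, sweeping φ over a full cycle traces a closed loop of styles, which is what makes cyclic translations such as timelapse generation possible in this formulation.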


Related research

- Graph2Pix: A Graph-Based Image to Image Translation Framework (08/22/2021). In this paper, we propose a graph-based image-to-image translation frame...
- Composable Unpaired Image to Image Translation (04/16/2018). There has been remarkable recent work in unpaired image-to-image transla...
- Dual Diffusion Implicit Bridges for Image-to-Image Translation (03/16/2022). Common image-to-image translation methods rely on joint training over da...
- ManiFest: Manifold Deformation for Few-shot Image Translation (11/26/2021). Most image-to-image translation methods require a large number of traini...
- Content and Colour Distillation for Learning Image Translations with the Spatial Profile Loss (08/01/2019). Generative adversarial networks have emerged as a de facto standard for im...
- Hypercomplex Image-to-Image Translation (05/04/2022). Image-to-image translation (I2I) aims at transferring the content repres...
- Guided Disentanglement in Generative Networks (07/29/2021). Image-to-image translation (i2i) networks suffer from entanglement effec...
