Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation

11/27/2017
by   Chao Wang, et al.

Image-to-image translation has made great progress with the adoption of Generative Adversarial Networks (GANs). However, it remains very challenging for translation tasks that demand high quality, especially at high resolution and with photo-realism. In this paper, we present Discriminative Region Proposal Adversarial Networks (DRPANs), composed of three components: a generator, a discriminator, and a reviser, for high-quality image-to-image translation. To reduce artifacts and blur during translation, we build on GANs and explore a patch discriminator that finds and extracts the most artificial image patch by sliding a window over its output score map, then maps the proposed patch onto the synthesized fake image as the discriminative region. We then mask the corresponding real image with the discriminative region to obtain a fake-mask real image. To provide constructive revisions to the generator, we propose a reviser for GANs that distinguishes the real image from the fake-mask real image, producing realistic details and serving as an auxiliary for generating high-quality translation results. Experiments on a variety of image-to-image translation tasks and datasets validate that our method outperforms state-of-the-art translation methods in producing high-quality results, in terms of both human perceptual studies and automatic quantitative measures.
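The region proposal step described above can be illustrated with a short sketch: slide a window over the patch discriminator's score map, pick the window with the lowest realness score as the most artificial region, map it back to image coordinates, and paste the corresponding fake patch onto the real image to form the fake-mask real input for the reviser. This is a minimal PyTorch sketch under our own assumptions (function name, window size, and the use of argmin over mean window scores are illustrative, not the authors' released code).

```python
import torch
import torch.nn.functional as F

def propose_discriminative_region(score_map, fake_img, real_img, win=4):
    """Hypothetical sketch of the discriminative region proposal step.

    score_map: (N, 1, H, W) patch-discriminator realness scores for the fake image
    fake_img:  (N, C, Hi, Wi) synthesized image
    real_img:  (N, C, Hi, Wi) corresponding real image
    """
    # Average scores inside each sliding window; the lowest mean score
    # is taken here as the "most artificial" patch on the score map.
    window_scores = F.avg_pool2d(score_map, kernel_size=win, stride=1)
    n, _, h, w = window_scores.shape
    idx = window_scores.view(n, -1).argmin(dim=1)  # most artificial window per sample
    ys, xs = idx // w, idx % w

    # Map window coordinates back to image coordinates (the score map is
    # a downsampled grid of the input, so scale by the size ratio).
    scale_y = fake_img.size(2) // score_map.size(2)
    scale_x = fake_img.size(3) // score_map.size(3)

    fake_mask_real = real_img.clone()
    for i in range(n):
        y0 = ys[i].item() * scale_y
        x0 = xs[i].item() * scale_x
        y1 = y0 + win * scale_y
        x1 = x0 + win * scale_x
        # Paste the proposed fake patch onto the real image to obtain the
        # "fake-mask real" image that is fed to the reviser.
        fake_mask_real[i, :, y0:y1, x0:x1] = fake_img[i, :, y0:y1, x0:x1]
    return fake_mask_real
```

In this reading, the reviser then receives real images and fake-mask real images and learns to tell them apart, pushing the generator to fix exactly the region the patch discriminator flagged as least realistic.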
