Local and Global GANs with Semantic-Aware Upsampling for Image Generation

02/28/2022
by Hao Tang, et al.

In this paper, we address the task of semantic-guided image generation. A challenge common to most existing image-level generation methods is the difficulty of generating small objects and detailed local textures. To address this, we consider generating images from local context: we design a local class-specific generative network that uses semantic maps as guidance and separately constructs and learns a subgenerator for each class, enabling it to capture finer details. To learn more discriminative class-specific feature representations for local generation, we also propose a novel classification module. To combine the advantages of global image-level and local class-specific generation, we design a joint generation network that embeds an attention fusion module and a dual-discriminator structure. Finally, we propose a novel semantic-aware upsampling method, which has a larger receptive field and can aggregate distant but semantically related pixels for feature upsampling, better preserving semantic consistency across instances that share a semantic label. Extensive experiments on two image generation tasks show the superior performance of the proposed method: state-of-the-art results are established by large margins on both tasks across nine challenging public benchmarks. The source code and trained models are available at https://github.com/Ha0Tang/LGGAN.
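The paper's semantic-aware upsampling learns its upsampling kernels from semantic similarity; the core intuition, though, is that a high-resolution pixel should draw on distant low-resolution pixels that share its semantic label, rather than only its nearest 2x2 neighborhood as in bilinear upsampling. A toy NumPy sketch of that intuition follows; the function name and the simple per-class averaging rule are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def semantic_aware_upsample(feat, seg_lr, seg_hr):
    """Toy semantic-aware upsampling.

    feat:   (h, w, c) low-resolution feature map
    seg_lr: (h, w)    low-resolution semantic labels
    seg_hr: (H, W)    high-resolution semantic labels

    Each high-resolution pixel is filled with the mean of all
    low-resolution features sharing its label, so spatially distant
    but semantically related pixels contribute to the result.
    """
    h, w, c = feat.shape
    out = np.zeros((*seg_hr.shape, c), dtype=feat.dtype)
    for lbl in np.unique(seg_hr):
        mask_lr = (seg_lr == lbl)
        if mask_lr.any():
            # average all features of this semantic class
            class_feat = feat[mask_lr].mean(axis=0)
        else:
            # label absent at low resolution: fall back to the global mean
            class_feat = feat.reshape(-1, c).mean(axis=0)
        out[seg_hr == lbl] = class_feat
    return out
```

A learned variant would replace the hard per-class average with similarity-weighted aggregation, but the toy version already shows why such upsampling keeps pixels with the same label consistent regardless of spatial distance.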


