Rethinking the Objectives of Vector-Quantized Tokenizers for Image Synthesis

by Yuchao Gu et al.

Vector-Quantized (VQ-based) generative models usually consist of two basic components, i.e., VQ tokenizers and generative transformers. Prior research focuses on improving the reconstruction fidelity of VQ tokenizers but rarely examines how this improvement affects the generation ability of generative transformers. In this paper, we surprisingly find that improving the reconstruction fidelity of VQ tokenizers does not necessarily improve generation. Instead, learning to compress semantic features within VQ tokenizers significantly improves generative transformers' ability to capture textures and structures. We thus highlight two competing objectives of VQ tokenizers for image synthesis: semantic compression and detail preservation. Unlike previous work that pursues only better detail preservation, we propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance the two objectives. In the first phase, we propose a semantic-enhanced perceptual loss for better semantic compression. In the second phase, we fix the encoder and codebook, then enhance and finetune the decoder for better detail preservation. SeQ-GAN greatly improves VQ-based generative models and surpasses GANs and diffusion models on both unconditional and conditional image generation. Our SeQ-GAN (364M parameters) achieves a Fréchet Inception Distance (FID) of 6.25 and an Inception Score (IS) of 140.9 on 256x256 ImageNet generation, a remarkable improvement over ViT-VQGAN (714M parameters), which obtains 11.2 FID and 97.2 IS.
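The core operation of any VQ tokenizer mentioned above is mapping each continuous latent vector produced by the encoder to the index of its nearest codebook entry; those discrete indices are the tokens the generative transformer later models. The following is a minimal pure-Python sketch of that nearest-neighbor quantization step, not the paper's actual implementation (which operates on learned encoder features and a trained codebook); the toy `codebook` and `latents` values are illustrative assumptions.

```python
def quantize(latents, codebook):
    """Map each continuous latent vector to the index of its nearest
    codebook entry under squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return [
        min(range(len(codebook)), key=lambda k: sq_dist(z, codebook[k]))
        for z in latents
    ]

# Toy codebook with 3 entries of dimension 2 (illustrative values only).
codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]]
latents = [[0.9, 1.1], [-0.8, 0.4], [0.1, -0.1]]
tokens = quantize(latents, codebook)
print(tokens)  # -> [1, 2, 0]
```

The paper's two competing objectives both act on this pipeline: semantic compression shapes what the encoder puts into `latents` (and thus what the codebook captures), while detail preservation concerns how faithfully the decoder can reconstruct an image from the chosen codebook entries.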




