On Distillation of Guided Diffusion Models

10/06/2022
by Chenlin Meng, et al.

Classifier-free guided diffusion models have recently been shown to be highly effective at high-resolution image generation, and they have been widely used in large-scale diffusion frameworks including DALL-E 2, GLIDE and Imagen. However, a downside of classifier-free guided diffusion models is that they are computationally expensive at inference time, since they require evaluating two diffusion models, a class-conditional model and an unconditional model, hundreds of times. To deal with this limitation, we propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from: given a pre-trained classifier-free guided model, we first learn a single model to match the output of the combined conditional and unconditional models, and then progressively distill that model into a diffusion model that requires far fewer sampling steps. On ImageNet 64x64 and CIFAR-10, our approach is able to generate images visually comparable to those of the original model using as few as 4 sampling steps, achieving FID/IS scores comparable to those of the original model while being up to 256 times faster to sample from.
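
To make the first distillation stage concrete, here is a minimal sketch, assuming a PyTorch setup with an epsilon-prediction parameterization. The network names (`teacher_cond`, `teacher_uncond`, `student`) and their signatures are hypothetical illustrations, not the paper's actual code: a single student, conditioned on the guidance weight w, is trained to regress onto the classifier-free guided combination of the two teacher outputs.

```python
# Sketch of stage-one distillation for classifier-free guidance.
# Model names, signatures, and the epsilon parameterization are assumptions.
import torch
import torch.nn as nn

def guided_eps(teacher_cond, teacher_uncond, x_t, t, y, w):
    """Classifier-free guided noise prediction:
    eps = (1 + w) * eps_cond(x_t, t, y) - w * eps_uncond(x_t, t)."""
    eps_c = teacher_cond(x_t, t, y)
    eps_u = teacher_uncond(x_t, t)
    return (1.0 + w) * eps_c - w * eps_u

def distill_step(student, teacher_cond, teacher_uncond, opt, x_0, y,
                 alphas_cumprod, w_range=(0.0, 4.0)):
    """One training step: the w-conditioned student matches the guided
    teacher prediction at a randomly drawn timestep and guidance weight."""
    b = x_0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x_0.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x_0)
    x_t = a.sqrt() * x_0 + (1 - a).sqrt() * noise            # forward diffusion
    w = torch.empty(b, device=x_0.device).uniform_(*w_range)  # random guidance weight
    with torch.no_grad():
        target = guided_eps(teacher_cond, teacher_uncond, x_t, t, y,
                            w.view(b, 1, 1, 1))
    pred = student(x_t, t, y, w)   # student takes w as an extra input
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The second stage described in the abstract, progressive distillation of this w-conditioned student into a sampler with far fewer steps, is omitted from the sketch above.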

Related research

02/11/2022
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality
Diffusion models have emerged as an expressive family of generative mode...

06/08/2023
BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
Diffusion models have demonstrated excellent potential for generating di...

03/17/2023
FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model
Recently, conditional diffusion models have gained popularity in numerou...

04/14/2023
Towards Controllable Diffusion Models via Reward-Guided Exploration
By formulating data samples' formation as a Markov denoising process, di...

08/15/2023
DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using Pre-trained Diffusion Models
Given a classifier, the inherent property of semantic Out-of-Distributio...

06/22/2023
DiffWA: Diffusion Models for Watermark Attack
With the rapid development of deep neural networks (DNNs), many robust bl...

02/08/2023
Q-Diffusion: Quantizing Diffusion Models
Diffusion models have achieved great success in synthesizing diverse and...
