BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping

06/08/2023
by Jiatao Gu, et al.

Diffusion models have demonstrated excellent potential for generating diverse images. However, their performance often suffers from slow generation due to iterative denoising. Knowledge distillation has recently been proposed as a remedy that can reduce the number of inference steps to one or a few without significant quality degradation. However, existing distillation methods either require significant amounts of offline computation to generate synthetic training data from the teacher model or need to perform expensive online learning with the help of real data. In this work, we present BOOT, a novel technique that overcomes these limitations with an efficient data-free distillation algorithm. The core idea is to learn a time-conditioned model that predicts the output of a pre-trained diffusion-model teacher at any given time step. Such a model can be trained efficiently by bootstrapping from two consecutive sampled steps. Furthermore, our method is easily adapted to large-scale text-to-image diffusion models, which are challenging for conventional methods because their training sets are often large and difficult to access. We demonstrate the effectiveness of our approach on several benchmark datasets in the DDIM setting, achieving comparable generation quality while being orders of magnitude faster than the diffusion teacher. The text-to-image results show that the proposed approach can handle highly complex distributions, shedding light on more efficient generative modeling.
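To make the bootstrapping idea concrete, below is a minimal PyTorch sketch of one such distillation step, written against the abstract's description rather than the authors' released code. Everything here is a labeled assumption: StudentNet, ddim_step, boot_loss, and the toy alpha schedule are hypothetical stand-ins, and the real method trains an image-space student rather than this toy MLP. The student g(eps, t) maps a fixed noise sample to a prediction of the teacher's DDIM output at step t; its prediction at the earlier step s is regressed onto one frozen-teacher DDIM step applied to its own stop-gradient prediction at t.

```python
# Minimal sketch of the bootstrapped distillation objective described above.
# All names (StudentNet, ddim_step, boot_loss, alpha schedule) are hypothetical
# stand-ins for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentNet(nn.Module):
    """Time-conditioned student g(eps, t): maps a fixed noise sample and a
    time step to a prediction of the teacher's DDIM output at that step."""
    def __init__(self, dim=64, t_max=1000):
        super().__init__()
        self.t_max = t_max
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, eps, t):
        # Toy conditioning: append the normalized time step as a feature.
        t_feat = torch.full((eps.shape[0], 1), t / self.t_max)
        return self.net(torch.cat([eps, t_feat], dim=-1))

def ddim_step(x_t, eps_pred, alpha_t, alpha_s):
    """One deterministic DDIM update from step t to an earlier step s, given
    a noise prediction eps_pred (alpha_* are cumulative alpha products)."""
    x0_pred = (x_t - (1 - alpha_t).sqrt() * eps_pred) / alpha_t.sqrt()
    return alpha_s.sqrt() * x0_pred + (1 - alpha_s).sqrt() * eps_pred

def boot_loss(student, teacher, eps, t, s, alphas):
    """Bootstrap between two consecutive sampled steps t > s: the student's
    output at s is regressed onto one frozen-teacher DDIM step applied to
    the student's stop-gradient output at t."""
    with torch.no_grad():
        x_t = student(eps, t)            # stop-gradient bootstrap branch
        eps_teacher = teacher(x_t, t)    # frozen pre-trained teacher
        target = ddim_step(x_t, eps_teacher, alphas[t], alphas[s])
    return F.mse_loss(student(eps, s), target)

# Toy usage: a randomly initialized network stands in for the real teacher.
student, teacher = StudentNet(), StudentNet()
alphas = torch.linspace(0.999, 0.001, 1000)  # toy cumulative-alpha schedule
eps = torch.randn(8, 64)                     # noise is the only "data" needed
t, s = 500, 490                              # two consecutive sampled steps
boot_loss(student, teacher, eps, t, s, alphas).backward()
```

Note that the only inputs to this loss are noise samples and time steps, which is what makes the distillation data-free; after training, a single student evaluation at the final (least noisy) step would play the role of the one- or few-step sampler described in the abstract.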

Related research

10/06/2022 · On Distillation of Guided Diffusion Models
Classifier-free guided diffusion models have recently been shown to be h...

09/19/2023 · Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation
Diffusion models power a vast majority of text-to-audio (TTA) generation...

06/01/2023 · SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds
Text-to-image diffusion models can create stunning images from natural l...

06/02/2023 · Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models
Knowledge distillation in neural networks refers to compressing a large ...

03/07/2023 · TRACT: Denoising Diffusion Models with Transitive Closure Time-Distillation
Denoising Diffusion models have demonstrated their proficiency for gener...

07/13/2023 · PC-Droid: Faster diffusion and improved quality for particle cloud generation
Building on the success of PC-JeDi we introduce PC-Droid, a substantiall...

05/18/2023 · Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling
Diffusion Probability Models (DPMs) have made impressive advancements in...
