Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA

04/12/2023
by James Seale Smith, et al.

Recent works demonstrate a remarkable ability to customize text-to-image diffusion models while providing only a few example images. What happens if you try to customize such models using multiple, fine-grained concepts in a sequential (i.e., continual) manner? In our work, we show that recent state-of-the-art customization of text-to-image models suffers from catastrophic forgetting when new concepts arrive sequentially. Specifically, when adding a new concept, the ability to generate high-quality images of past, similar concepts degrades. To circumvent this forgetting, we propose a new method, C-LoRA, composed of a continually self-regularized low-rank adaptation in the cross-attention layers of the popular Stable Diffusion model. Furthermore, we use customization prompts that do not include the word of the customized object (e.g., "person" for a human face dataset) and are initialized as completely random embeddings. Importantly, our method induces only marginal additional parameter costs and requires no storage of user data for replay. We show that C-LoRA not only outperforms several baselines in our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also achieves a new state-of-the-art in the well-established rehearsal-free continual learning setting for image classification. The strong performance of C-LoRA in two separate domains positions it as a compelling solution for a wide range of applications, and we believe it has significant potential for practical impact.
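The abstract describes C-LoRA at a high level: low-rank adapters on the cross-attention projections of Stable Diffusion, plus a self-regularization term that discourages each new concept's update from overwriting weight entries already changed by earlier concepts. The PyTorch sketch below illustrates that idea; the class and method names (CLoRALinear, forget_penalty, finish_concept) and the exact form of the penalty are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a continually self-regularized low-rank adapter applied to
# one frozen cross-attention projection. Names and the exact penalty are assumptions
# for exposition, not the paper's reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a per-task low-rank update A @ B."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)            # frozen Stable Diffusion weight
        d_out, d_in = base.weight.shape
        # Accumulated low-rank updates from all previously learned concepts (kept frozen).
        self.register_buffer("past_delta", torch.zeros(d_out, d_in))
        self.A = nn.Parameter(torch.randn(d_out, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, d_in))    # zero-init so training starts at identity

    def delta(self) -> torch.Tensor:
        return self.A @ self.B                            # current concept's low-rank update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.base.weight + self.past_delta + self.delta()
        return F.linear(x, w, self.base.bias)

    def forget_penalty(self) -> torch.Tensor:
        # Self-regularization: penalize new updates that land on weight entries
        # already modified by past concepts (elementwise overlap).
        return (self.past_delta.abs() * self.delta()).pow(2).sum()

    @torch.no_grad()
    def finish_concept(self):
        # Fold the learned update into the frozen history before the next concept arrives.
        self.past_delta.add_(self.delta())
        nn.init.normal_(self.A, std=0.01)
        nn.init.zeros_(self.B)

# Training loss per step (lambda_reg is a hypothetical hyperparameter):
# loss = diffusion_loss + lambda_reg * sum(m.forget_penalty() for m in clora_modules)
```

Wrapping only the cross-attention projections this way keeps the added parameter count small, consistent with the abstract's claim of marginal parameter cost, and storing only the accumulated low-rank deltas means no user images need to be retained for replay.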

Related research

09/08/2023  Create Your World: Lifelong Text-to-Image Diffusion
Text-to-image generative models can produce diverse high-quality images ...

03/27/2023  Exploring Continual Learning of Diffusion Models
Diffusion models have achieved remarkable success in generating high-qua...

05/29/2023  Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
Public large-scale text-to-image diffusion models, such as Stable Diffus...

11/17/2022  ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
Recently, large-scale pre-trained Vision-and-Language (VL) foundation mo...

08/02/2023  Training Data Protection with Compositional Diffusion Models
We introduce Compartmentalized Diffusion Models (CDM), a method to train...

05/17/2023  Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
The recent proliferation of large-scale text-to-image models has led to ...

07/17/2020  A biological plausible audio-visual integration model for continual lifelong learning
The problem of catastrophic forgetting can be traced back to the 1980s, ...
