Arbitrary Style Guidance for Enhanced Diffusion-Based Text-to-Image Generation

11/14/2022
by   Zhihong Pan, et al.

Diffusion-based text-to-image generation models like GLIDE and DALL-E 2 have recently gained wide success for their superior performance in turning complex text inputs into images of high quality and wide diversity. In particular, they have proven very powerful at creating graphic art in various formats and styles. Although current models support specifying coarse style formats like oil painting or pencil drawing, fine-grained style features such as color distributions and brush strokes are hard to specify, as they are sampled from a conditional distribution based on the given text input. Here we propose a novel style guidance method that supports generating images in an arbitrary style guided by a reference image. The method requires no separate style transfer model to produce the desired style, while maintaining the quality of the generated content as controlled by the text input. Additionally, the guidance can be applied without a style reference, denoted as self style guidance, to generate images of more diverse styles. Comprehensive experiments show that the proposed method remains robust and effective across a wide range of conditions, including diverse graphic art forms, image content types, and diffusion models.
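The core idea of gradient-based style guidance can be sketched in a few lines. Below is a minimal toy illustration, not the authors' implementation: it assumes identity (pixel-level) features rather than a pretrained feature extractor, uses a Gram-matrix style loss as in classic neural style transfer, and shifts the predicted denoised mean along the negative style-loss gradient, analogous to classifier guidance. The function names (`gram`, `style_loss_and_grad`, `guided_denoise_step`) and the `scale` parameter are hypothetical.

```python
import numpy as np

def gram(feats):
    # feats: (C, N) feature map flattened over spatial positions.
    C, N = feats.shape
    return feats @ feats.T / N

def style_loss_and_grad(feats, ref_feats):
    # Squared Frobenius distance between Gram matrices of the current
    # sample and the style reference, plus its analytic gradient w.r.t.
    # feats: dL/dF = (4/N) * (G - G_ref) @ F  (G - G_ref is symmetric).
    C, N = feats.shape
    diff = gram(feats) - gram(ref_feats)
    loss = np.sum(diff ** 2)
    grad = 4.0 / N * diff @ feats
    return loss, grad

def guided_denoise_step(x_mean, feats_fn, ref_feats, scale=0.01):
    # Shift the predicted denoised mean x_mean along the negative
    # style-loss gradient, analogous to classifier guidance but driven
    # by a style objective instead of a class label.
    feats = feats_fn(x_mean)
    loss, grad_f = style_loss_and_grad(feats, ref_feats)
    # With identity features, the gradient w.r.t. x is just the feature
    # gradient reshaped back to image shape.
    return x_mean - scale * grad_f.reshape(x_mean.shape), loss
```

In a real sampler this step would be applied inside the reverse-diffusion loop, with `feats_fn` given by a pretrained encoder, so content from the text condition is preserved while style drifts toward the reference.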


Related research

09/04/2023 | StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation
This paper presents a LoRA-free method for stylized image generation tha...

04/13/2023 | Expressive Text-to-Image Generation with Rich Text
Plain text has become a prevalent interface for text-to-image synthesis....

06/19/2023 | Conditional Text Image Generation with Diffusion Models
Current text recognition systems, including those for handwritten script...

11/21/2022 | DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Contrastive Prompt-Tuning
Large-scale text-to-image generation models have achieved remarkable pro...

10/19/2021 | Fine-Grained Control of Artistic Styles in Image Generation
Recent advances in generative models and adversarial training have enabl...

03/15/2022 | APRNet: Attention-based Pixel-wise Rendering Network for Photo-Realistic Text Image Generation
Style-guided text image generation tries to synthesize text image by imi...

09/13/2023 | MagiCapture: High-Resolution Multi-Concept Portrait Customization
Large-scale text-to-image models including Stable Diffusion are capable ...
