UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild

05/18/2023
by Can Qin, et al.

Machine autonomy and human control often represent divergent objectives in the design of interactive AI systems. Visual generative foundation models such as Stable Diffusion show promise in navigating these goals, especially when prompted with free-form language. However, they often fall short in generating images with spatial, structural, or geometric controls. The integration of such controls, which can accommodate various visual conditions in a single unified model, remains an unaddressed challenge. In response, we introduce UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a single framework, while still allowing for arbitrary language prompts. UniControl enables pixel-level precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. To equip UniControl with the capacity to handle diverse visual conditions, we augment pretrained text-to-image diffusion models and introduce a task-aware HyperNet to modulate the diffusion models, enabling adaptation to different C2I tasks simultaneously. Trained on nine unique C2I tasks, UniControl demonstrates impressive zero-shot generation abilities with unseen visual conditions. Experimental results show that UniControl often surpasses the performance of single-task controlled methods of comparable model size. This control versatility positions UniControl as a significant advancement in the realm of controllable visual generation.
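To make the task-aware modulation idea concrete, below is a minimal PyTorch sketch of one plausible design: a HyperNet that maps a task identifier to FiLM-style scale/shift parameters applied to features of a shared control branch. It is illustrative only; the module names, dimensions, and the scale/shift formulation are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn

NUM_TASKS = 9          # the paper trains on nine C2I tasks (e.g. edges, depth, segmentation)
TASK_EMB_DIM = 256     # assumed embedding width
FEAT_CHANNELS = 320    # assumed channel width of the modulated feature map

class TaskAwareHyperNet(nn.Module):
    """Hypothetical HyperNet: maps a task id to per-channel scale/shift parameters."""
    def __init__(self):
        super().__init__()
        self.task_embedding = nn.Embedding(NUM_TASKS, TASK_EMB_DIM)
        self.to_modulation = nn.Sequential(
            nn.Linear(TASK_EMB_DIM, TASK_EMB_DIM),
            nn.SiLU(),
            nn.Linear(TASK_EMB_DIM, 2 * FEAT_CHANNELS),  # concatenated scale and shift
        )

    def forward(self, task_id: torch.Tensor):
        emb = self.task_embedding(task_id)                 # (B, TASK_EMB_DIM)
        scale, shift = self.to_modulation(emb).chunk(2, dim=-1)
        return scale, shift                                # each (B, FEAT_CHANNELS)

def modulate(features: torch.Tensor, scale: torch.Tensor, shift: torch.Tensor):
    """Apply task-conditioned modulation to control-branch features (B, C, H, W)."""
    return features * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

# Usage sketch: one shared control branch serves all C2I tasks, with the
# HyperNet switching its behavior per task.
hypernet = TaskAwareHyperNet()
task_id = torch.tensor([3])                       # e.g. the depth-map task
feats = torch.randn(1, FEAT_CHANNELS, 64, 64)     # stand-in control-branch features
scale, shift = hypernet(task_id)
adapted = modulate(feats, scale, shift)
```

The design choice this illustrates is why a single model can cover many conditions: the heavy diffusion backbone stays shared, and only a lightweight, task-indexed modulation changes per task, which also makes interpolating to unseen visual conditions plausible.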


Related research

02/16/2023 · MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
Recent advances in text-to-image generation with diffusion models presen...

05/29/2023 · Controllable Text-to-Image Generation with GPT-4
Current text-to-image generation models often struggle to follow textual...

05/25/2023 · DiffCLIP: Leveraging Stable Diffusion for Language Grounded 3D Classification
Large pre-trained models have had a significant impact on computer visio...

04/13/2023 · Learning Controllable 3D Diffusion Models from Single-view Images
Diffusion models have recently become the de-facto approach for generati...

10/17/2021 · MeronymNet: A Hierarchical Approach for Unified and Controllable Multi-Category Object Generation
We introduce MeronymNet, a novel hierarchical approach for controllable,...

03/14/2023 · Interpretable ODE-style Generative Diffusion Model via Force Field Construction
For a considerable time, researchers have focused on developing a method...
