Generating Images with Multimodal Language Models

05/26/2023
by Jing Yu Koh, et al.

We propose a method to fuse frozen text-only large language models (LLMs) with pre-trained image encoder and decoder models, by mapping between their embedding spaces. Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue. Ours is the first approach capable of conditioning on arbitrarily interleaved image and text inputs to generate coherent image (and text) outputs. To achieve strong performance on image generation, we propose an efficient mapping network to ground the LLM to an off-the-shelf text-to-image generation model. This mapping network translates hidden representations of text into the embedding space of the visual models, enabling us to leverage the strong text representations of the LLM for visual outputs. Our approach outperforms baseline generation models on tasks with longer and more complex language. In addition to novel image generation, our model is also capable of image retrieval from a prespecified dataset, and decides whether to retrieve or generate at inference time. This is done with a learnt decision module which conditions on the hidden representations of the LLM. Our model exhibits a wider range of capabilities compared to prior multimodal language models. It can process image-and-text inputs, and produce retrieved images, generated images, and generated text – outperforming non-LLM based generation models across several text-to-image tasks that measure context dependence.
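The abstract describes two learned components: a mapping network that translates the LLM's hidden representations into the embedding space of an off-the-shelf text-to-image model, and a decision module that conditions on the same hidden states to choose between retrieval and generation. The sketch below illustrates that idea in minimal form. It is not the paper's architecture: the dimensions, the single linear map (the paper uses a small learned network), the number of conditioning tokens, and the sigmoid decision threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_LLM = 4096   # hidden size of the frozen LLM (assumed)
D_IMG = 768    # embedding size expected by the image generator (assumed)
N_TOK = 8      # number of conditioning vectors produced (assumed)

# Mapping network (here a single linear projection for illustration):
# translates one LLM hidden state into N_TOK vectors in the image
# model's conditioning space.
W_map = rng.standard_normal((D_LLM, N_TOK * D_IMG)) * 0.01

def map_to_image_space(h):
    """h: (D_LLM,) LLM hidden state -> (N_TOK, D_IMG) conditioning."""
    return (h @ W_map).reshape(N_TOK, D_IMG)

# Decision module (illustrative): a linear classifier over the same
# hidden state that picks "generate" or "retrieve" at inference time.
w_dec = rng.standard_normal(D_LLM) * 0.01

def decide(h):
    score = 1.0 / (1.0 + np.exp(-(h @ w_dec)))  # sigmoid
    return "generate" if score > 0.5 else "retrieve"

h = rng.standard_normal(D_LLM)
print(map_to_image_space(h).shape)  # (8, 768)
print(decide(h))
```

In the actual system, both components are trained while the LLM and the image models stay frozen, so only the mapping and decision parameters (the analogues of `W_map` and `w_dec` above) receive gradients.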
