
Retrieving Multimodal Information for Augmented Generation: A Survey

by Ruochen Zhao, et al.

In this survey, we review methods that retrieve multimodal knowledge to assist and augment generative models. This line of work focuses on retrieving grounding contexts from external sources, including images, code, tables, graphs, and audio. As multimodal learning and generative AI have become increasingly impactful, such retrieval augmentation offers a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. We provide an in-depth review of retrieval-augmented generation across different modalities and discuss potential future directions. As this is an emerging field, we continue to add new papers and methods.
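To make the retrieve-then-generate pattern concrete, here is a minimal sketch of retrieval-augmented generation. Everything in it is a toy stand-in, not a method from the survey: the corpus, the character-frequency `embed()`, and the prompt-building `generate()` are hypothetical placeholders for a real multimodal encoder, retrieval index, and conditional generator.

```python
import numpy as np

# Hypothetical external knowledge source (a real system might index
# images, code, tables, graphs, or audio alongside text).
CORPUS = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "The mitochondria is the powerhouse of the cell.",
]

def embed(text):
    # Toy embedding: normalized letter-frequency vector. A real system
    # would use a learned (possibly multimodal) encoder.
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, corpus, k=1):
    # Rank corpus entries by cosine similarity to the query embedding
    # and return the top-k as grounding contexts.
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [doc for _, doc in ranked[:k]]

def generate(query, contexts):
    # Stand-in for a conditional generator: prepend the retrieved
    # grounding context to the prompt before generation.
    return f"Context: {' '.join(contexts)}\nQuestion: {query}"

query = "Where is the Eiffel Tower?"
prompt = generate(query, retrieve(query, CORPUS, k=1))
print(prompt)
```

The key design point the survey examines is exactly this factorization: a retriever grounds the generator in external evidence, so factuality and interpretability improvements come from swapping in better retrievers and knowledge sources rather than retraining the generator.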



