Efficient Multimodal Fusion via Interactive Prompting

04/13/2023
by Yaowei Li, et al.

Large-scale pre-training has brought unimodal fields such as computer vision and natural language processing into a new era. Following this trend, the size of multimodal learning models keeps increasing, leading to an urgent need to reduce the massive computational cost of fine-tuning these models for downstream tasks. In this paper, we propose an efficient and flexible multimodal fusion method, namely PMF, tailored for fusing unimodally pre-trained transformers. Specifically, we first present a modular multimodal fusion framework that exhibits high flexibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimization objectives for multimodal learning. It is also worth noting that we propose to add prompt vectors only to the deep layers of the unimodal transformers, thus significantly reducing the training memory usage. Experimental results show that our proposed method achieves comparable performance to several other multimodal fine-tuning methods with less than 3% trainable parameters and up to 66% saving of training memory usage.
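To make the deep-layer prompting idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. The class name `DeepLayerPromptFusion`, the argument `fusion_start_layer`, the three prompt-group names, and the assumption that both encoders have the same depth are all illustrative choices; the cross-modal interaction between prompt groups described in the paper is omitted here for brevity. The sketch only shows the core mechanism: shallow layers run prompt-free on frozen weights, and learnable prompts are prepended only at the deep layers.

```python
import torch
import torch.nn as nn


class DeepLayerPromptFusion(nn.Module):
    """Sketch of prompt-based fusion over two frozen unimodal encoders.

    Learnable prompts are injected only from `fusion_start_layer` onward,
    so the shallow layers run prompt-free and their activations need no
    gradients for the prompt parameters, which is where the training
    memory saving comes from.
    """

    def __init__(self, vision_blocks, text_blocks, dim,
                 n_prompts=4, fusion_start_layer=8):
        super().__init__()
        self.vision_blocks = vision_blocks      # frozen ViT-style blocks
        self.text_blocks = text_blocks          # frozen BERT-style blocks
        self.fusion_start_layer = fusion_start_layer
        n_deep = len(vision_blocks) - fusion_start_layer
        # Three groups of prompts per deep layer, echoing the paper's idea
        # of disentangling vanilla prompts into three specialized types
        # (group names here are placeholders).
        self.prompts = nn.ParameterDict({
            name: nn.Parameter(torch.randn(n_deep, n_prompts, dim) * 0.02)
            for name in ("query", "query_context", "fusion_context")
        })

    def _layer_prompts(self, j, batch_size):
        # Concatenate the three prompt groups for deep layer j.
        p = torch.cat([self.prompts[k][j] for k in
                       ("query", "query_context", "fusion_context")], dim=0)
        return p.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, vis_tokens, txt_tokens):
        # vis_tokens, txt_tokens: (batch, seq_len, dim)
        for i, (v_blk, t_blk) in enumerate(zip(self.vision_blocks,
                                               self.text_blocks)):
            if i < self.fusion_start_layer:
                # Shallow layers: plain frozen forward pass, no prompts.
                vis_tokens = v_blk(vis_tokens)
                txt_tokens = t_blk(txt_tokens)
            else:
                # Deep layers: prepend fresh prompts, run the frozen block,
                # then drop the prompt positions to keep sequence length fixed.
                p = self._layer_prompts(i - self.fusion_start_layer,
                                        vis_tokens.size(0))
                n = p.size(1)
                vis_tokens = v_blk(torch.cat([p, vis_tokens], dim=1))[:, n:]
                txt_tokens = t_blk(torch.cat([p, txt_tokens], dim=1))[:, n:]
        return vis_tokens, txt_tokens
```

In this setup only `self.prompts` (plus any task head defined outside the sketch) would be trainable; the unimodal blocks stay frozen, which is consistent with the parameter and memory savings reported in the abstract.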

Related research:

03/15/2022  Modular and Parameter-Efficient Multimodal Fusion with Prompting
  Recent research has made impressive progress in large-scale multimodal p...

02/20/2023  Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey
  With the urgent demand for generalized deep models, many pre-trained big...

02/18/2022  VLP: A Survey on Vision-Language Pre-training
  In the past few years, the emergence of pre-training models has brought ...

04/12/2022  Are Multimodal Transformers Robust to Missing Modality?
  Multimodal data collected from the real world are often imperfect due to...

11/23/2021  Sparse Fusion for Multimodal Transformers
  Multimodal classification is a core task in human-centric machine learni...

04/11/2023  MoMo: A shared encoder Model for text, image and multi-Modal representations
  We propose a self-supervised shared encoder model that achieves strong r...

05/04/2023  Multimodal Understanding Through Correlation Maximization and Minimization
  Multimodal learning has mainly focused on learning large models on, and ...
