Deeply Coupled Cross-Modal Prompt Learning

05/29/2023
by   Xuejing Liu, et al.

Recent multimodal foundation models (e.g., CLIP) have excelled at zero-shot generalization. Prompt tuning, which transfers knowledge from foundation models to downstream tasks, has consequently gained significant attention. Existing prompt-tuning methods in cross-modal learning, however, either focus solely on the language branch or learn vision-language interaction through a shallow mechanism. In this context, we propose a Deeply coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly accommodates the interplay between vision and language with a Cross-Modal Prompt Attention (CMPA) mechanism, which enables the two modalities to exchange representations progressively through a well-connected multi-head attention module. We conduct comprehensive few-shot learning experiments on 11 image classification datasets and also analyze robustness to domain shift. Thorough experimental analysis demonstrates the strong few-shot generalization and compelling domain adaptation capacity of DCP. The code can be found at https://github.com/GingL/CMPA.
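To make the CMPA idea concrete, the following is a minimal, hypothetical sketch of a cross-modal prompt attention layer. The class name, dimensions, and residual wiring are assumptions for illustration, not the authors' implementation: each branch's learnable prompts query the other branch's prompts through standard multi-head attention, so vision and language prompt representations are exchanged at every coupled layer.

```python
import torch
import torch.nn as nn

class CrossModalPromptAttention(nn.Module):
    """Hypothetical sketch of a CMPA-style layer: vision and language
    prompts exchange information through multi-head attention.
    Names and structure are illustrative assumptions, not the paper's code."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # One attention module per direction: text -> vision and vision -> text.
        self.text_to_vision = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vision_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis_prompts: torch.Tensor, txt_prompts: torch.Tensor):
        # Vision prompts query the language prompts, and vice versa,
        # so each branch's prompts absorb the other modality's context.
        vis_update, _ = self.text_to_vision(vis_prompts, txt_prompts, txt_prompts)
        txt_update, _ = self.vision_to_text(txt_prompts, vis_prompts, vis_prompts)
        # Residual connection keeps each branch's own representation.
        return vis_prompts + vis_update, txt_prompts + txt_update

# Example: 4 prompts per branch, batch of 2, CLIP-like embedding width 512.
layer = CrossModalPromptAttention(dim=512, num_heads=8)
vis = torch.randn(2, 4, 512)
txt = torch.randn(2, 4, 512)
vis_out, txt_out = layer(vis, txt)
print(vis_out.shape, txt_out.shape)
```

In a deeply coupled setup, one such exchange would be applied at successive transformer layers of both encoders, rather than only at the input, which is what distinguishes a deep coupling from the shallow interaction the abstract contrasts against.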
