Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

by Wentao Bao et al.

The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts (e.g., sliced tomatoes) when models are trained only on seen compositions (e.g., sliced potatoes and red tomatoes). Thanks to prompt tuning on large pre-trained vision-language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that affect generalization to unseen compositions, including the diversity and informativeness of class context and the entanglement between visual primitives (i.e., states and objects), are not properly addressed in the existing CLIP-based CZSL literature. In this paper, we propose a model that prompts the language-informed distribution, dubbed PLID, for the CZSL task. Specifically, PLID leverages pre-trained large language models (LLMs) to 1) formulate the language-informed class distribution, and 2) enhance the compositionality of the softly prompted class embeddings. Moreover, a stochastic logit mixup strategy is proposed to dynamically fuse the predictions from the compositional and the primitive logit spaces. Orthogonal to the existing literature on soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution, which leads to better compositional zero-shot generalization. Experimental results on the MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of PLID over the prior art. The code and models will be publicly released.
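The stochastic logit mixup idea described in the abstract, fusing decisions from a compositional logit space and a primitive (state/object) logit space with a randomly sampled coefficient, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function name, the Beta-distributed mixing coefficient, and the additive state+object scoring for each composition are assumptions made for the example.

```python
import numpy as np

def stochastic_logit_mixup(comp_logits, state_logits, obj_logits, pairs,
                           alpha=1.0, rng=None):
    """Illustrative fusion of compositional and primitive predictions.

    comp_logits:  (N, C) logits over composition classes.
    state_logits: (N, S) logits over state primitives.
    obj_logits:   (N, O) logits over object primitives.
    pairs:        list of (state_idx, obj_idx), one per composition class.
    """
    rng = np.random.default_rng() if rng is None else rng
    s_idx = np.array([s for s, _ in pairs])
    o_idx = np.array([o for _, o in pairs])
    # Score each composition in the primitive space by summing the
    # logits of its state and object (an assumed decomposition).
    prim_logits = state_logits[:, s_idx] + obj_logits[:, o_idx]
    # Sample a stochastic mixing coefficient in (0, 1).
    lam = rng.beta(alpha, alpha)
    # Convex combination of the two decision spaces.
    return lam * comp_logits + (1.0 - lam) * prim_logits
```

Because the coefficient is resampled at each call, the fused decision varies between leaning on the compositional branch and the primitive branch rather than committing to a fixed weighting.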

