ProTeCt: Prompt Tuning for Hierarchical Consistency

06/04/2023
by   Tz-Ying Wu, et al.

Large visual-language models, like CLIP, learn generalized representations and have shown promising zero-shot performance. Few-shot adaptation methods based on prompt tuning have been shown to further improve performance on downstream datasets. However, these models are not hierarchically consistent: they frequently infer incorrect labels at coarser taxonomic levels even when inference at the leaf level (the original class labels) is correct. This is problematic given their support for open-set classification and, in particular, open-granularity classification, where practitioners define label sets at various levels of granularity. To address this problem, we propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions. We first propose a set of hierarchical consistency metrics, the Hierarchical Consistent Accuracy (HCA) and the Mean Treecut Accuracy (MTA), to benchmark model performance in the open-granularity setting. We then propose Prompt Tuning for Hierarchical Consistency (ProTeCt), a technique that calibrates classification across all possible label set granularities. Results show that ProTeCt can be combined with existing prompt tuning methods to significantly improve open-granularity classification without degrading the original leaf-level performance.
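The abstract's notion of hierarchical consistency can be illustrated with a minimal sketch: a sample counts as correct only if the model's prediction is right at every level of the label taxonomy, not just at the leaf. The toy class names, two-level hierarchy, and predictions below are invented for illustration and are not from the paper.

```python
def hca(pred_chains, true_chains):
    """Hierarchical Consistent Accuracy sketch: pred_chains[i][d] is the
    predicted class at depth d for sample i (coarse to fine), and
    true_chains holds the ground-truth chains. A sample is counted
    only if every depth matches."""
    correct = sum(
        all(p == t for p, t in zip(pc, tc))
        for pc, tc in zip(pred_chains, true_chains)
    )
    return correct / len(true_chains)

# Two samples over a toy two-level taxonomy (coarse, leaf).
preds = [["animal", "dog"], ["animal", "car"]]  # second coarse label is wrong
truth = [["animal", "dog"], ["vehicle", "car"]]

print(hca(preds, truth))  # 0.5
```

Note that leaf-level accuracy here is 1.0 (both "dog" and "car" are correct), yet HCA is only 0.5 because the second sample is inconsistent at the coarse level; this is exactly the gap the paper's calibration targets.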


Related research:

- 02/06/2023 · CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets
- 07/20/2022 · On Label Granularity and Object Localization
- 12/12/2022 · Doubly Right Object Recognition: A Why Prompt for Visual Rationales
- 11/09/2022 · Zero-Label Prompt Selection
- 05/24/2023 · A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification
- 12/01/2022 · Improving Zero-Shot Models with Label Distribution Priors
- 03/09/2023 · R-Tuning: Regularized Prompt Tuning in Open-Set Scenarios
