Leveraging Multiple Descriptive Features for Robust Few-shot Image Learning

07/10/2023
by Zhili Feng, et al.

Modern image classification is based upon directly predicting image classes via large discriminative networks, making it difficult to assess the intuitive visual “features” that may constitute a classification decision. At the same time, recent works in joint vision-language models such as CLIP provide ways to specify natural language descriptions of image classes, but typically focus on providing a single description for each class. In this work, we demonstrate that an alternative approach, arguably more akin to our understanding of multiple “visual features” per class, can also provide compelling performance in the robust few-shot learning setting. In particular, we automatically enumerate multiple visual descriptions of each class – via a large language model (LLM) – then use a vision-language model to translate these descriptions into a set of visual features for each image; we finally use sparse logistic regression to select a relevant subset of these features to classify each image. This both provides an “intuitive” set of relevant features for each class, and, in the few-shot learning setting, outperforms standard approaches such as linear probing. When combined with finetuning, we also show that the method is able to outperform existing state-of-the-art finetuning approaches on both in-distribution and out-of-distribution performance.
