Geometry-Aware Adaptation for Pretrained Models

07/23/2023
by Nicholas Roberts, et al.

Machine learning models – including prominent zero-shot models – are often trained on datasets whose labels are only a small proportion of a larger label space. Such spaces are commonly equipped with a metric that relates the labels via distances between them. We propose a simple approach to exploit this information to adapt the trained model to reliably predict new classes – or, in the case of zero-shot prediction, to improve its performance – without any additional training. Our technique is a drop-in replacement of the standard prediction rule, swapping argmax with the Fréchet mean. We provide a comprehensive theoretical analysis for this approach, studying (i) learning-theoretic results trading off label space diameter, sample complexity, and model dimension, (ii) characterizations of the full range of scenarios in which it is possible to predict any unobserved class, and (iii) an optimal active learning-like next class selection procedure to obtain optimal training classes when it is not possible to predict the entire range of unobserved classes. Empirically, using easily-available external metrics, our proposed approach, Loki, gains up to a 29.7% relative improvement and scales to hundreds of thousands of classes. When no such metric is available, Loki can use self-derived metrics from class embeddings and obtains a 10.5% improvement.
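The core idea in the abstract – replacing argmax with the Fréchet mean over a metric label space – can be sketched in a few lines. The function name and interface below are assumptions for illustration, not the paper's actual implementation: given the model's probabilities over its observed classes and a distance matrix relating every candidate label (including unobserved ones) to those classes, we predict the candidate minimizing the probability-weighted sum of squared distances (the discrete Fréchet mean under the squared metric).

```python
import numpy as np

def frechet_mean_predict(probs, dist):
    """Hypothetical Loki-style prediction rule (sketch, not the paper's code).

    probs: shape (k,), model's softmax scores over the k observed classes.
    dist:  shape (n, k), metric distances from each of n candidate labels
           (the full label space) to each observed class.
    Returns the index of the candidate label minimizing the weighted
    Frechet objective sum_j probs[j] * dist[i, j]**2.
    """
    cost = (dist ** 2) @ probs  # one weighted cost per candidate label
    return int(np.argmin(cost))

# Toy example: observed classes sit at positions 0, 1, 2 on a line; the
# full label space also contains unobserved classes at 0.5 and 1.5.
obs = np.array([0.0, 1.0, 2.0])
cand = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
dist = np.abs(cand[:, None] - obs[None, :])

probs = np.array([0.1, 0.2, 0.7])  # model leans toward the class at 2
print(frechet_mean_predict(probs, dist))  # → 3 (the unobserved class at 1.5)
```

Note that plain argmax could only ever return one of the three observed classes; the Fréchet-mean rule uses the metric to select the unobserved label at 1.5, which is exactly the adaptation-without-retraining behavior the abstract describes.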


