Contrastive Adapters for Foundation Model Group Robustness

07/14/2022
by Michael Zhang, et al.

While large pretrained foundation models (FMs) have shown remarkable zero-shot classification robustness to dataset-level distribution shifts, their robustness to subpopulation or group shifts is relatively underexplored. We study this problem and find that FMs such as CLIP may not be robust to various group shifts. Across 9 robustness benchmarks, zero-shot classification with their embeddings results in gaps of up to 80.7 percentage points (pp) between average and worst-group accuracy. Unfortunately, existing methods to improve robustness require retraining, which can be prohibitively expensive for large foundation models. We also find that efficient ways to improve model inference (e.g., via adapters, lightweight networks that take FM embeddings as inputs) do not consistently improve group robustness and can sometimes hurt it compared to zero-shot classification (e.g., increasing the accuracy gap by 50.1 pp on CelebA). We thus develop an adapter training strategy to effectively and efficiently improve FM group robustness. Our motivating observation is that while poor robustness results from groups in the same class being embedded far apart in the foundation model "embedding space," standard adapter training may not bring these points closer together. We thus propose contrastive adapting, which trains adapters with contrastive learning to bring sample embeddings close to both their ground-truth class embeddings and other sample embeddings in the same class. Across the 9 benchmarks, our approach consistently improves group robustness, raising worst-group accuracy by 8.5 to 56.0 pp over zero-shot. It is also efficient, requiring no FM finetuning and only a fixed set of frozen FM embeddings. On benchmarks such as Waterbirds and CelebA, this leads to worst-group accuracy comparable to state-of-the-art methods that retrain entire models, while only training ≤1% of the model parameters.
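To make the training objective concrete, below is a minimal PyTorch sketch of the contrastive-adapter idea described in the abstract. It is an illustration under assumptions, not the authors' released implementation: the Adapter architecture, the particular supervised-contrastive loss, the equal weighting of the two terms, and the temperature tau are all hypothetical choices. Only the overall goal comes from the abstract: train a lightweight adapter over frozen FM embeddings so that each sample embedding is pulled toward its ground-truth class embedding and toward same-class sample embeddings.

# Hypothetical sketch of contrastive adapter training over frozen FM embeddings.
# Architecture, loss form, and tau are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Lightweight network that maps frozen FM embeddings to adapted embeddings."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products below are cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def contrastive_adapter_loss(adapter, img_emb, labels, class_emb, tau=0.1):
    """img_emb: (B, D) frozen FM image embeddings; labels: (B,) class ids;
    class_emb: (C, D) frozen FM class (e.g., text prompt) embeddings."""
    z = adapter(img_emb)                                   # (B, D), unit norm
    class_emb = F.normalize(class_emb, dim=-1)

    # (a) Pull samples toward their ground-truth class embedding:
    # cross-entropy over temperature-scaled cosine logits.
    logits = z @ class_emb.t() / tau                       # (B, C)
    loss_cls = F.cross_entropy(logits, labels)

    # (b) Pull samples toward other same-class samples in the batch:
    # a supervised-contrastive term, with self-similarity excluded.
    sim = z @ z.t() / tau                                  # (B, B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = same & ~eye                                      # positive pairs
    logp = sim - torch.logsumexp(
        sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True
    )
    per_sample = -(logp * pos).sum(1) / pos.sum(1).clamp(min=1)
    has_pos = pos.any(1)
    loss_sup = per_sample[has_pos].mean() if has_pos.any() else z.new_zeros(())

    return loss_cls + loss_sup

Note that in this sketch only the adapter's parameters receive gradients; img_emb and class_emb are precomputed once from the frozen FM, which is what keeps training cheap relative to finetuning the full model.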


