A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others

by Zhiheng Li, et al.

Machine learning models have been found to learn shortcuts – unintended decision rules that fail to generalize – undermining models' reliability. Previous works address this problem under the tenuous assumption that only a single shortcut exists in the training data. Real-world images are rife with multiple visual cues, from background to texture. Key to advancing the reliability of vision systems is understanding whether existing methods can overcome multiple shortcuts or instead struggle in a Whac-A-Mole game, i.e., where mitigating one shortcut amplifies reliance on others. To answer this question, we propose two benchmarks: 1) UrbanCars, a dataset with precisely controlled spurious cues, and 2) ImageNet-W, an ImageNet-based evaluation set for the watermark shortcut, which we discovered affects nearly every modern vision model. Along with texture and background, ImageNet-W allows us to study multiple shortcuts emerging from training on natural images. We find that computer vision models, including large foundation models – regardless of training set, architecture, and supervision – struggle when multiple shortcuts are present. Even methods explicitly designed to combat shortcuts fall into a Whac-A-Mole dilemma. To tackle this challenge, we propose Last Layer Ensemble, a simple-yet-effective method to mitigate multiple shortcuts without Whac-A-Mole behavior. Our results surface multi-shortcut mitigation as an overlooked challenge critical to advancing the reliability of vision systems. The datasets and code are released: https://github.com/facebookresearch/Whac-A-Mole.git.
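The name "Last Layer Ensemble" suggests the general shape of the method: several lightweight classification heads sharing one backbone, each targeting a different shortcut, with their predictions combined at test time. As a rough illustration only (this is a minimal sketch under that assumption, not the paper's actual implementation; all names, shapes, and the plain averaging rule are hypothetical), such an ensemble could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class LastLayerEnsembleSketch:
    """Hypothetical sketch: K linear heads over shared, frozen backbone
    features. In a multi-shortcut setting, each head would be trained on
    data augmented to break one specific cue (e.g. background swap,
    texture shuffle, watermark overlay); here the heads are just randomly
    initialized to show the inference-time combination."""

    def __init__(self, feat_dim, n_classes, n_heads):
        # one linear head per targeted shortcut
        self.heads = [rng.normal(scale=0.01, size=(feat_dim, n_classes))
                      for _ in range(n_heads)]

    def predict_proba(self, feats):
        # average the per-head softmax predictions
        probs = [softmax(feats @ W) for W in self.heads]
        return np.mean(probs, axis=0)

feats = rng.normal(size=(4, 16))  # stand-in backbone features for 4 images
lle = LastLayerEnsembleSketch(feat_dim=16, n_classes=10, n_heads=3)
p = lle.predict_proba(feats)
print(p.shape)  # (4, 10)
```

Since only the small heads differ, the backbone's forward pass is shared across all heads, keeping inference cost close to that of a single model.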


