Memory Classifiers: Two-stage Classification for Robustness in Machine Learning

06/10/2022
by Souradeep Dutta, et al.

The performance of machine learning models can degrade significantly under distribution shifts in the data. We propose a new classification method that improves robustness to distribution shifts by combining expert knowledge about the "high-level" structure of the data with standard classifiers. Specifically, we introduce two-stage classifiers called memory classifiers. First, they identify prototypical data points – memories – to cluster the training data. This step is based on features designed with expert guidance; for instance, for image data these can be extracted using digital image processing algorithms. Then, within each cluster, we learn local classifiers based on finer discriminating features, via standard models such as deep neural networks. We establish generalization bounds for memory classifiers. In experiments on image datasets, we show that they can improve generalization and robustness to distribution shifts, with improvements beyond those achievable with standard data augmentation techniques.
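The two-stage procedure can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: k-means stands in for the expert-guided memory selection in stage one, and logistic regression stands in for the local deep networks in stage two; the class name `MemoryClassifier` and the feature-matrix split (`X_expert` for coarse expert features, `X_fine` for fine discriminating features) are assumptions for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class MemoryClassifier:
    """Two-stage classifier sketch: cluster on coarse expert features,
    then fit a local classifier on fine features within each cluster."""

    def __init__(self, n_memories=3):
        self.n_memories = n_memories

    def fit(self, X_expert, X_fine, y):
        # Stage 1: find prototypical points ("memories") by clustering
        # the expert features; k-means is a stand-in for expert guidance.
        self.memories_ = KMeans(n_clusters=self.n_memories, n_init=10,
                                random_state=0).fit(X_expert)
        clusters = self.memories_.labels_
        # Stage 2: fit one local classifier per cluster on fine features.
        self.locals_ = {}
        for c in range(self.n_memories):
            mask = clusters == c
            self.locals_[c] = LogisticRegression(max_iter=1000).fit(
                X_fine[mask], y[mask])
        return self

    def predict(self, X_expert, X_fine):
        # Route each point to its nearest memory, then apply that
        # cluster's local classifier.
        clusters = self.memories_.predict(X_expert)
        out = np.empty(len(X_fine), dtype=int)
        for c, clf in self.locals_.items():
            mask = clusters == c
            if mask.any():
                out[mask] = clf.predict(X_fine[mask])
        return out
```

Routing by nearest memory keeps each local model's job simple; robustness to distribution shift then rests on the expert features used in stage one remaining stable across environments.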


