Unknown Examples & Machine Learning Model Generalization

by Yeounoh Chung et al.

Over the past decades, researchers and ML practitioners have developed increasingly effective ways to build, understand, and improve the quality of ML models, but mostly under the key assumption that the training data is distributed identically to the testing data. In many real-world applications, however, some potential training examples are unknown to the modeler, due to sample selection bias or, more generally, covariate shift, i.e., a distribution shift between the training and deployment stages. The resulting discrepancy between the training and testing distributions leads to poor generalization performance of the ML model and hence biased predictions. We provide novel algorithms that estimate the number and properties of these unknown training examples: the unknown unknowns. This information can then be used to correct the training set, prior to seeing any test data. The key idea is to combine species-estimation techniques with data-driven methods for estimating the feature values of the unknown unknowns. Experiments on a variety of ML models and datasets indicate that taking the unknown examples into account can yield a more robust ML model that generalizes better.
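To make the species-estimation idea concrete, here is a minimal sketch of the classical Chao1 estimator, a standard technique from this literature for estimating how many classes (here, unseen training examples) are missing from a sample based on how many items were observed exactly once or twice. The function name and the toy sample are illustrative, not taken from the paper, and the paper's full method additionally estimates feature values for the missing examples, which this sketch does not cover.

```python
from collections import Counter

def chao1_unseen(observed_items):
    """Estimate the number of unseen classes via the Chao1 species estimator.

    f1 = number of classes observed exactly once (singletons),
    f2 = number of classes observed exactly twice (doubletons).
    Estimate of unseen classes: f1^2 / (2 * f2), with a bias-corrected
    fallback when no doubletons were observed.
    """
    counts = Counter(observed_items)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    if f2 > 0:
        return f1 * f1 / (2 * f2)
    # Bias-corrected form avoids division by zero when f2 == 0
    return f1 * (f1 - 1) / 2

# Toy sample: "c" and "d" are singletons, "a" and "b" are doubletons
sample = ["a", "a", "b", "b", "c", "d"]
print(chao1_unseen(sample))  # f1=2, f2=2 -> 2*2/(2*2) = 1.0
```

Intuitively, a sample containing many singletons suggests that many more classes remain unobserved, which is the same signal the paper exploits to decide how many unknown unknowns the training set is missing.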




