Towards Understanding Feature Learning in Out-of-Distribution Generalization

by Yongqiang Chen, et al.

A common explanation for the failure of out-of-distribution (OOD) generalization is that a model trained with empirical risk minimization (ERM) learns spurious features instead of the desired invariant features. However, several recent studies have challenged this explanation, finding that deep networks may already learn sufficiently good features for OOD generalization. The debate also extends to the correlation between in-distribution and OOD performance when training or fine-tuning neural nets across a variety of OOD generalization tasks. To understand these seemingly contradictory phenomena, we conduct a theoretical investigation and find that ERM essentially learns both spurious and invariant features. Moreover, the quality of the features learned during ERM pre-training significantly affects the final OOD performance, as OOD objectives rarely learn new features: failing to capture all the underlying useful features during pre-training thus limits the final OOD performance. To remedy this issue, we propose Feature Augmented Training (FAT), which encourages the model to learn all useful features over multiple rounds, retaining the already learned features while augmenting new ones. In each round, the retention and augmentation operations are performed on different subsets of the training data that capture distinct features. Extensive experiments show that FAT effectively learns richer features and consistently improves the OOD performance when applied to various objectives.
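The round-based retention/augmentation loop described above can be sketched in a toy binary-classification setting. This is an illustrative reconstruction, not the authors' implementation: the `fat_train` function, the logistic model, and the rule that partitions each round's data by whether the current model already classifies it correctly are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fat_train(X, y, rounds=3, steps=200, lr=0.5, lam=1.0):
    """Toy FAT-style loop: each round, split the data into a retention set
    (examples the current model already gets right) and an augmentation set
    (examples it still misses), then fit an ERM loss on the augmentation set
    to pick up new features plus a retention term that keeps the retention
    set correctly classified."""
    w = np.zeros(X.shape[1])  # logistic-regression weights
    for _ in range(rounds):
        p = sigmoid(X @ w)
        ret = (p > 0.5) == (y > 0.5)   # retention set: already learned
        aug = ~ret                     # augmentation set: still missed
        for _ in range(steps):
            p = sigmoid(X @ w)
            grad = np.zeros_like(w)
            if aug.any():  # ERM on the augmentation set (learn new features)
                grad += X[aug].T @ (p[aug] - y[aug]) / aug.sum()
            if ret.any():  # retention term: stay correct on retained examples
                grad += lam * X[ret].T @ (p[ret] - y[ret]) / ret.sum()
            w -= lr * grad
    return w

# Usage on synthetic data where the label depends on both features:
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = fat_train(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The key design point the sketch illustrates is that the two losses act on disjoint subsets that change each round, so later rounds can focus capacity on still-unlearned features without degrading the already-learned ones.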


Invariant Risk Minimization

We introduce Invariant Risk Minimization (IRM), a learning paradigm to e...

On Feature Learning in the Presence of Spurious Correlations

Deep classifiers are known to rely on spurious features – patterns w...

Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization

A common explanation for the failure of deep networks to generalize out-...

Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning

Recent studies show that even highly biased dense networks contain an un...

Towards Better Web Search Performance: Pre-training, Fine-tuning and Learning to Rank

This paper describes the approach of the THUIR team at the WSDM Cup 2023...

The Benefits of Mixup for Feature Learning

Mixup, a simple data augmentation method that randomly mixes two data po...

SFP: Spurious Feature-targeted Pruning for Out-of-Distribution Generalization

Model substructure learning aims to find an invariant network substructu...
