Quantifying and Improving Transferability in Domain Generalization

06/07/2021
by Guojun Zhang et al.

Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world. Existing efforts mostly focus on building invariant features across source and target domains. Based on invariant features, a classifier that performs well on the source domains could hopefully behave equally well on a target domain. In other words, the invariant features are transferable. However, in practice there are no perfectly transferable features, and some algorithms seem to learn "more transferable" features than others. How can we understand and quantify such transferability? In this paper, we formally define a notion of transferability that can be quantified and computed in domain generalization. We point out its difference from, and connection to, common discrepancy measures between domains, such as total variation and Wasserstein distance. We then prove that our transferability can be estimated with enough samples, and give a new upper bound on the target error based on it. Empirically, we evaluate the transferability of the feature embeddings learned by existing algorithms for domain generalization. Surprisingly, we find that many algorithms do not quite learn transferable features, although a few still manage to. In light of this, we propose a new algorithm for learning transferable features and test it on various benchmark datasets, including RotatedMNIST, PACS, Office-Home and WILDS-FMoW. Experimental results show that the proposed algorithm achieves consistent improvement over many state-of-the-art algorithms, corroborating our theoretical findings.
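The abstract contrasts the proposed transferability measure with common discrepancy measures between domains, such as total variation and Wasserstein distance. As a rough illustration only (this is not the transferability measure defined in the paper), the sketch below estimates how far apart source and target feature embeddings are by averaging the per-dimension 1-Wasserstein distance; the function name, shapes, and random stand-in embeddings are hypothetical.

```python
# A minimal sketch, assuming feature embeddings are given as NumPy arrays of
# shape (n_samples, n_features). This is a generic discrepancy estimate, not
# the paper's transferability measure.
import numpy as np
from scipy.stats import wasserstein_distance


def mean_feature_wasserstein(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Average per-dimension 1-Wasserstein distance between source and target embeddings."""
    assert source_feats.shape[1] == target_feats.shape[1], "feature dimensions must match"
    dists = [
        wasserstein_distance(source_feats[:, j], target_feats[:, j])
        for j in range(source_feats.shape[1])
    ]
    return float(np.mean(dists))


# Hypothetical usage with random stand-ins for learned embeddings:
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(1000, 16))
target = rng.normal(0.5, 1.0, size=(1000, 16))
print(mean_feature_wasserstein(source, target))  # larger value => less aligned features
```

A smaller discrepancy between source and target embeddings is the intuition behind "more transferable" features; the paper's contribution is a formal transferability definition, its estimation guarantees, and a target-error bound built on it.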


Related research

05/01/2023 · Dynamic Transfer Learning across Graphs
Transferring knowledge across graphs plays a pivotal role in many high-s...

12/15/2022 · Non-IID Transfer Learning on Graphs
Transfer learning refers to the transfer of knowledge or information fro...

02/09/2022 · Agree to Disagree: Diversity through Disagreement for Better Transferability
Gradient-based learning algorithms have an implicit simplicity bias whic...

06/25/2020 · Target Consistency for Domain Adaptation: when Robustness meets Transferability
Learning Invariant Representations has been successfully applied for rec...

02/17/2021 · Transferability of Neural Network-based De-identification Systems
Methods and Materials: We investigated transferability of neural network...

05/12/2023 · To transfer or not transfer: Unified transferability metric and analysis
In transfer learning, transferability is one of the most fundamental pro...

12/23/2019 · How to Pick the Best Source Data? Measuring Transferability for Heterogeneous Domains
Given a set of source data with pre-trained classification models, how c...
