Learning to Augment via Implicit Differentiation for Domain Generalization

10/25/2022
by Tingwei Wang, et al.

Machine learning models are intrinsically vulnerable to domain shift between training and testing data, which causes poor performance in novel domains. Domain generalization (DG) aims to overcome this problem by leveraging multiple source domains to learn a domain-generalizable model. In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn. Unlike existing data augmentation methods, AugLearn treats the data augmentation module as a set of hyper-parameters of the classification model and optimizes the module jointly with the model via meta-learning. Specifically, at each training step, AugLearn (i) divides the source domains into a pseudo-source and a pseudo-target set, and (ii) trains the augmentation module so that the augmented (synthetic) images enable the model to generalize well on the pseudo-target set. Moreover, to avoid the expensive second-order gradient computation during meta-learning, we formulate an efficient joint training algorithm for the augmentation module and the classification model based on the implicit function theorem. With the flexibility to augment data in both the time and frequency spaces, AugLearn demonstrates its effectiveness on three standard DG benchmarks: PACS, Office-Home and Digits-DG.
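The key technical ingredient of the abstract is the implicit-function-theorem (IFT) trick: the augmentation module's parameters are hyper-parameters of an inner classification problem, and the gradient of the outer (pseudo-target) loss with respect to them is obtained through the inner optimum rather than by unrolling training steps. The following is a minimal toy sketch of that idea on a one-dimensional bilevel problem; the loss functions and all names here are illustrative assumptions, not the paper's actual objectives or code.

```python
# Toy IFT hypergradient sketch (illustrative, not AugLearn's actual objectives).
# Inner problem (stand-in for classifier training on pseudo-source data):
#   L_in(w, lam) = (w - 1)^2 + lam * w^2, minimized over w for fixed lam.
# Outer problem (stand-in for the pseudo-target loss):
#   L_out(w) = (w - 0.5)^2, differentiated w.r.t. the hyper-parameter lam.

def solve_inner(lam):
    # Closed-form minimizer of the inner loss: w*(lam) = 1 / (1 + lam).
    return 1.0 / (1.0 + lam)

def ift_hypergradient(lam):
    # IFT: at the inner optimum, dw*/dlam = -(d^2L_in/dw^2)^{-1} * d^2L_in/(dw dlam),
    # so dL_out/dlam = dL_out/dw * dw*/dlam -- no unrolled inner trajectory needed.
    w_star = solve_inner(lam)
    dLout_dw = 2.0 * (w_star - 0.5)
    d2Lin_dw2 = 2.0 + 2.0 * lam       # inner Hessian (a scalar here)
    d2Lin_dwdlam = 2.0 * w_star       # mixed second derivative
    dwstar_dlam = -d2Lin_dwdlam / d2Lin_dw2
    return dLout_dw * dwstar_dlam

# At lam = 0.5: w* = 2/3 and the analytic hypergradient is -4/27.
g = ift_hypergradient(0.5)
```

In the high-dimensional setting the inner Hessian is no longer a scalar, so methods in this family approximate the inverse-Hessian-vector product (e.g. with a truncated Neumann series or conjugate gradients); the one-dimensional version above only illustrates the structure of the computation.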


Related research

06/13/2021
Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation
Medical imaging datasets usually exhibit domain shift due to the variati...

04/08/2021
Open Domain Generalization with Domain-Augmented Meta-Learning
Leveraging datasets available to learn a model with high generalization ...

12/27/2020
Domain Generalisation with Domain Augmented Supervised Contrastive Learning (Student Abstract)
Domain generalisation (DG) methods address the problem of domain shift, ...

01/20/2022
Domain Generalization via Frequency-based Feature Disentanglement and Interaction
Data out-of-distribution is a meta-challenge for all statistical learnin...

03/12/2021
Uncertainty-guided Model Generalization to Unseen Domains
We study a worst-case scenario in generalization: Out-of-domain generali...

02/05/2021
In-Loop Meta-Learning with Gradient-Alignment Reward
At the heart of the standard deep learning training loop is a greedy gra...

04/26/2023
Implicit Counterfactual Data Augmentation for Deep Neural Networks
Machine-learning models are prone to capturing the spurious correlations...
