Causality Inspired Representation Learning for Domain Generalization

by Fangrui Lv, et al.
Alibaba Group
Beijing Institute of Technology

Domain generalization (DG) is essentially an out-of-distribution problem, aiming to generalize the knowledge learned from multiple source domains to an unseen target domain. The mainstream approach is to leverage statistical models to capture the dependence between data and labels, intending to learn representations that are independent of domain. Nevertheless, statistical models are superficial descriptions of reality, since they are only required to model dependence rather than the intrinsic causal mechanism. When the dependence changes with the target distribution, statistical models may fail to generalize. In this regard, we introduce a general structural causal model to formalize the DG problem. Specifically, we assume that each input is constructed from a mix of causal factors (whose relationship with the label is invariant across domains) and non-causal factors (category-independent), and that only the former cause the classification judgments. Our goal is to extract the causal factors from inputs and then reconstruct the invariant causal mechanisms. However, this theoretical idea is far from practical for DG, since the required causal/non-causal factors are unobserved. We highlight that ideal causal factors should satisfy three basic properties: they are separated from the non-causal ones, jointly independent, and causally sufficient for the classification. Based on these properties, we propose a Causality Inspired Representation Learning (CIRL) algorithm that enforces the representations to satisfy them and then uses the representations to simulate the causal factors, which yields improved generalization ability. Extensive experimental results on several widely used datasets verify the effectiveness of our approach.
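Of the three properties above, joint independence is the most mechanical to illustrate. One common relaxation (a necessary condition only, since decorrelation is weaker than full statistical independence) is to penalize off-diagonal entries of the correlation matrix of the representation dimensions. The sketch below is a hypothetical illustration of that idea, not the authors' actual CIRL implementation; the function name and shapes are assumptions for the example.

```python
import numpy as np

def decorrelation_penalty(z):
    """Off-diagonal correlation penalty that encourages the dimensions
    of a representation batch to be pairwise uncorrelated.

    z: array of shape (batch, dim), one representation per row.
    Returns a scalar >= 0 that approaches 0 when dimensions are
    uncorrelated across the batch.
    """
    z = z - z.mean(axis=0, keepdims=True)        # center each dimension
    std = z.std(axis=0, keepdims=True) + 1e-8    # guard against zero variance
    zn = z / std                                 # standardize dimensions
    corr = (zn.T @ zn) / z.shape[0]              # empirical correlation matrix
    off_diag = corr - np.diag(np.diag(corr))     # zero out the diagonal
    return float(np.sum(off_diag ** 2))          # sum of squared correlations
```

In a training loop this term would be added to the classification loss; near-independent dimensions yield a penalty close to zero, while redundant (duplicated or linearly dependent) dimensions are penalized heavily.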




