Causally Invariant Predictor with Shift-Robustness

07/05/2021
by Xiangyu Zheng, et al.

This paper proposes an invariant causal predictor that is robust to distribution shift across domains and maximally preserves the transferable invariant information. Based on a disentangled causal factorization, we formulate distribution shift as soft interventions on the system, which covers a wide range of shift scenarios because we make no prior assumptions about the causal structure or the intervened variables. Instead of imposing regularization terms to constrain the predictor to be invariant, we propose to predict with the intervened conditional expectation defined through the do-operator, and we prove that this predictor is invariant across domains. More importantly, we prove that the proposed predictor is the robust predictor that minimizes the worst-case quadratic loss over the distributions of all domains. For empirical learning, we propose an intuitive and flexible estimation method based on data regeneration, together with a local causal discovery procedure to guide the regeneration step. The key idea is to regenerate the data so that the regenerated distribution is compatible with the intervened graph, which allows standard supervised learning methods to be applied to the regenerated data. Experimental results on both synthetic and real data demonstrate the efficacy of our predictor in improving predictive accuracy and robustness across domains.
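In symbols, the two claims above can be sketched as follows; the notation (covariates X, target Y, and the set of intervention-induced domains) is an illustrative assumption rather than the paper's own definitions:

```latex
% Illustrative sketch; notation assumed, not taken from the paper.
% Predict with the intervened conditional expectation (do-operator):
\[
  f^{*}(x) = \mathbb{E}\bigl[\, Y \mid \mathrm{do}(X = x) \,\bigr],
\]
% and this predictor minimizes the worst-case quadratic loss over all domains:
\[
  f^{*} \in \arg\min_{f} \, \sup_{e \in \mathcal{E}} \, \mathbb{E}_{e}\bigl[(Y - f(X))^{2}\bigr].
\]
```

As a rough illustration of the regeneration idea (a minimal sketch, not the authors' algorithm), the toy example below redraws X independently of a single confounder Z, so that the simulated sample is compatible with the intervened graph in which the edge Z -> X has been cut, and then fits an ordinary supervised regressor on the regenerated data. The linear data-generating process, the variable names, and the assumption that the structural equation for Y is known are simplifications made only to keep the example short; in practice that mechanism would itself have to be estimated.

```python
# Toy sketch of estimating E[Y | do(X = x)] by data regeneration.
# Not the paper's procedure; all names and mechanisms are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Observational data with a confounder: X <- Z -> Y and X -> Y.
Z = rng.normal(size=n)
X = 1.5 * Z + rng.normal(size=n)            # X depends on Z
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)  # true causal slope of X is 2

# Naive regression of Y on X is biased by the backdoor path through Z.
naive = LinearRegression().fit(X[:, None], Y)

# Regeneration: redraw X from its marginal, independently of Z, and
# regenerate Y from the (here assumed known) causal mechanism, i.e.
# simulate from the intervened graph with the Z -> X edge removed.
X_new = rng.permutation(X)                  # breaks the X-Z dependence
Y_new = 2.0 * X_new + 3.0 * Z + rng.normal(size=n)

# Standard supervised learning on the regenerated data recovers the
# interventional slope, while the naive fit does not.
causal = LinearRegression().fit(X_new[:, None], Y_new)
print(f"naive slope:  {naive.coef_[0]:.2f}")   # ~3.4 (confounded)
print(f"causal slope: {causal.coef_[0]:.2f}")  # ~2.0 (interventional)
```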

