ACE: Active Learning for Causal Inference with Expensive Experiments

06/13/2023
by Difan Song, et al.

Experiments are the gold standard for causal inference. In many applications, experimental units can be recruited or chosen sequentially, and adaptive execution of such experiments can offer greatly improved inference of causal quantities over non-adaptive approaches, particularly when experiments are expensive. We thus propose a novel active learning method called ACE (Active learning for Causal inference with Expensive experiments), which leverages Gaussian process modeling of the conditional mean functions to guide an informed sequential design of costly experiments. In particular, we develop new acquisition functions for sequential design via the minimization of the posterior variance of a desired causal estimand. Our approach facilitates targeted learning of a variety of causal estimands, such as the average treatment effect (ATE), the average treatment effect on the treated (ATTE), and individualized treatment effects (ITE), and can be used for adaptive selection of an experimental unit and/or the applied treatment. We then demonstrate in a suite of numerical experiments the improved performance of ACE over baseline methods for estimating causal estimands given a limited number of experiments.
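To make the idea concrete, the following is a minimal, hypothetical sketch of a posterior-variance-minimizing acquisition for the ATE, not the authors' implementation or their exact acquisition functions: two independent Gaussian processes model the conditional mean functions mu_0(x) and mu_1(x), and each candidate (unit, treatment) pair is scored by the posterior variance of the plug-in ATE over a reference pool that would result from running that experiment. The function names, kernel choices, and toy outcome model below are all assumptions for illustration.

# Hypothetical sketch (not the authors' code): GP-based acquisition that
# selects the next (unit, treatment) pair minimizing posterior ATE variance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gp(X, y):
    # Fixed kernel hyperparameters (optimizer=None) so posterior covariances
    # are comparable across fantasized designs; in practice they would be
    # re-estimated periodically from the observed data.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=1e-2, optimizer=None)
    if len(X) > 0:
        gp.fit(X, y)
    return gp  # an unfitted GP falls back to its prior in predict()

def ate_posterior_variance(gp0, gp1, X_ref):
    # Posterior variance of (1/n) * sum_i [mu_1(x_i) - mu_0(x_i)] under
    # independent GP posteriors for the two treatment arms.
    n = len(X_ref)
    _, cov0 = gp0.predict(X_ref, return_cov=True)
    _, cov1 = gp1.predict(X_ref, return_cov=True)
    return (cov0.sum() + cov1.sum()) / n ** 2

def acquire(X_obs, y_obs, t_obs, X_cand, X_ref):
    # GP posterior variance does not depend on the outcome values, so a dummy
    # response can be "fantasized" for each candidate experiment.
    best = None
    for x in X_cand:
        for t in (0, 1):
            X_new = np.vstack([X_obs, x])
            y_new = np.append(y_obs, 0.0)   # fantasy outcome (ignored by the variance)
            t_new = np.append(t_obs, t)
            gp0 = fit_gp(X_new[t_new == 0], y_new[t_new == 0])
            gp1 = fit_gp(X_new[t_new == 1], y_new[t_new == 1])
            score = ate_posterior_variance(gp0, gp1, X_ref)
            if best is None or score < best[0]:
                best = (score, x, t)
    return best

# Toy sequential-design loop with a 1-D covariate and synthetic outcomes.
rng = np.random.default_rng(0)
X_ref = rng.uniform(-2, 2, size=(50, 1))    # pool defining the ATE of interest
X_cand = rng.uniform(-2, 2, size=(20, 1))   # units available for experimentation
X_obs, y_obs, t_obs = np.empty((0, 1)), np.empty(0), np.empty(0, dtype=int)

for step in range(5):
    var, x, t = acquire(X_obs, y_obs, t_obs, X_cand, X_ref)
    y = np.sin(3 * x[0]) + t * (1.0 + 0.5 * x[0]) + 0.1 * rng.normal()
    X_obs = np.vstack([X_obs, x])
    y_obs = np.append(y_obs, y)
    t_obs = np.append(t_obs, t)
    print(f"step {step}: treatment {t}, posterior ATE variance {var:.4f}")

In this sketch, candidates remain in the pool after selection and hyperparameters are held fixed; targeting the ATTE or an ITE would amount to swapping in the posterior variance of the corresponding estimand as the acquisition score.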
