Model-robust and efficient inference for cluster-randomized experiments

10/13/2022
by Bingkai Wang, et al.

Cluster-randomized experiments are increasingly used to evaluate interventions under routine practice conditions, and researchers often adopt model-based methods with covariate adjustment in the statistical analyses. However, the validity of model-based covariate adjustment is unclear when the working models are misspecified, leading to ambiguity about the estimands and a risk of bias. In this article, we first adapt two conventional model-based methods, generalized estimating equations and linear mixed models, with weighted g-computation to achieve robust inference for cluster- and individual-average treatment effects. Furthermore, we propose an efficient estimator for each estimand that allows for flexible covariate adjustment and additionally addresses cluster size variation dependent on treatment assignment and other cluster characteristics. Such cluster size variation often arises post-randomization and, if ignored, can bias model-based methods. For our proposed method, we show that when the nuisance functions are consistently estimated by machine learning algorithms, the estimator is consistent, asymptotically normal, and efficient. When the nuisance functions are instead estimated via parametric working models, the estimator remains triply robust. Simulation studies and the analysis of a recent cluster-randomized experiment demonstrate that the proposed methods are superior to existing alternatives.
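For concreteness, the two estimands mentioned in the abstract are commonly formalized as follows, writing N_i for the size of cluster i and Y_{ij}(a) for the potential outcome of individual j in cluster i under treatment a (notation introduced here for illustration; it may differ from the paper's):

\[
\Delta_C = E\!\left[\frac{1}{N_i}\sum_{j=1}^{N_i}\{Y_{ij}(1)-Y_{ij}(0)\}\right],
\qquad
\Delta_I = \frac{E\!\left[\sum_{j=1}^{N_i}\{Y_{ij}(1)-Y_{ij}(0)\}\right]}{E[N_i]}.
\]

The cluster-average effect weights every cluster equally, while the individual-average effect weights clusters by their size; the two generally differ when cluster size is associated with the treatment effect. The sketch below illustrates the weighted g-computation idea in this setting: fit an outcome working model, predict both potential outcomes for every individual, average the predictions within each cluster, and then average across clusters with the weights implied by the chosen estimand. This is a minimal illustration with hypothetical column names and a simple linear working model, not the authors' implementation, and it omits the variance estimation and robustness machinery described in the paper.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def weighted_g_computation(df, estimand="individual"):
    """df has one row per individual, with columns 'cluster', 'treat'
    (cluster-level 0/1 assignment), outcome 'y', and covariates 'x1', 'x2'."""
    X = df[["treat", "x1", "x2"]]
    model = LinearRegression().fit(X, df["y"])  # outcome working model

    # Predict both potential outcomes for every individual.
    mu1 = model.predict(X.assign(treat=1))
    mu0 = model.predict(X.assign(treat=0))

    # Average the predictions within each cluster and record cluster sizes.
    by_cluster = (df.assign(mu1=mu1, mu0=mu0)
                    .groupby("cluster")
                    .agg(mu1=("mu1", "mean"), mu0=("mu0", "mean"), n=("y", "size")))

    # Cluster-average estimand: equal weights; individual-average: weight by size.
    weights = by_cluster["n"] if estimand == "individual" else np.ones(len(by_cluster))
    return np.average(by_cluster["mu1"] - by_cluster["mu0"], weights=weights)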
