Robust inference with knockoffs
We consider the variable selection problem, which seeks to identify important variables influencing a response Y out of many candidate features X_1, ..., X_p. We wish to do so while offering finite-sample guarantees about the fraction of false positives: selected variables X_j that in fact have no effect on Y once the other features are known. When the number of features p is large (perhaps even larger than the sample size n) and we have no prior knowledge regarding the type of dependence between Y and X, the model-X knockoffs framework nonetheless allows us to select a model with a guaranteed bound on the false discovery rate, as long as the distribution of the feature vector X = (X_1, ..., X_p) is exactly known. This model selection procedure operates by constructing "knockoff copies" of each of the p features, which are then used as a control group to ensure that the model selection algorithm does not choose too many irrelevant features. In this work, we study the practical setting where the distribution of X can only be estimated, rather than known exactly, so that the knockoff copies of the X_j's are constructed somewhat incorrectly. Our results, which are free of any modeling assumption whatsoever, show that the resulting model selection procedure incurs an inflation of the false discovery rate that is proportional to our errors in estimating the distribution of each feature X_j conditional on the remaining features {X_k : k ≠ j}. The model-X knockoffs framework is therefore robust to errors in the underlying assumptions on the distribution of X, making it an effective method for many practical applications, such as genome-wide association studies, where the distribution of the features X_1, ..., X_p is estimated accurately but not known exactly.
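To make the procedure concrete, here is a minimal Python sketch of the knockoffs pipeline in the setting the abstract describes: features modeled as Gaussian with an estimated (not exact) covariance matrix, the equi-correlated Gaussian knockoff construction, lasso coefficient-difference statistics, and the knockoff+ stopping rule at a target FDR level q. This is an illustration under stated assumptions, not the paper's implementation; the function names gaussian_knockoffs and knockoff_select and the fixed lasso penalty are illustrative choices.

import numpy as np
from sklearn.linear_model import Lasso

def gaussian_knockoffs(X, Sigma_hat, rng):
    # Equi-correlated Gaussian knockoffs, treating rows of X as N(0, Sigma_hat).
    # Sigma_hat is an *estimate* of the feature covariance; errors here are
    # exactly the kind of misspecification the paper's robustness result covers.
    n, p = X.shape
    lam_min = np.linalg.eigvalsh(Sigma_hat).min()
    s = min(1.0, 2.0 * lam_min) * np.ones(p)            # equi-correlated choice of s
    Sigma_inv_S = np.linalg.solve(Sigma_hat, np.diag(s))
    mu = X - X @ Sigma_inv_S                            # conditional mean of knockoffs
    V = 2.0 * np.diag(s) - np.diag(s) @ Sigma_inv_S     # conditional covariance
    C = np.linalg.cholesky(V + 1e-8 * np.eye(p))        # jitter for numerical safety
    return mu + rng.standard_normal((n, p)) @ C.T

def knockoff_select(X, Xk, y, q=0.1):
    # Lasso on [X, knockoffs]; W_j = |beta_j| - |beta_{j+p}| compares each
    # feature against its knockoff control, then knockoff+ picks the threshold.
    p = X.shape[1]
    beta = Lasso(alpha=0.05).fit(np.hstack([X, Xk]), y).coef_  # alpha is illustrative
    W = np.abs(beta[:p]) - np.abs(beta[p:])
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:                                # estimated FDP under control
            return np.where(W >= t)[0]
    return np.array([], dtype=int)

# Toy usage: 5 truly relevant features out of 50.
rng = np.random.default_rng(0)
n, p, k = 500, 50, 5
X = rng.standard_normal((n, p))
y = X[:, :k] @ np.ones(k) + rng.standard_normal(n)
Sigma_hat = np.corrcoef(X, rowvar=False)  # estimated, not the true covariance
Xk = gaussian_knockoffs(X, Sigma_hat, rng)
print(knockoff_select(X, Xk, y))

Note that Sigma_hat is estimated from the same data, so the knockoffs are constructed somewhat incorrectly; the abstract's main result says the resulting FDR inflation is bounded in proportion to the errors in the implied conditional distributions of each X_j given the rest.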