Test Error Estimation after Model Selection Using Validation Error
When a model is selected in supervised learning by minimizing the validation error from sample splitting or cross-validation, the minimum value of the validation error can be biased downward as an estimate of the test error. We propose two simple methods that reuse the errors produced in the validation step to estimate the test error after model selection, focusing on situations where the model is selected by minimizing either the validation error or a randomized validation error. Our methods do not require model refitting, and their additional computational cost is negligible. In the sample-splitting setting, we show that the proposed test error estimates have biases of size o(1/√n) under suitable assumptions. Based on this result, we also propose using the bootstrap to construct confidence intervals for the test error. We apply the proposed methods in a number of simulations and examine their performance.
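The abstract does not spell out the proposed estimators, but the downward bias it addresses is easy to reproduce. Below is a minimal simulation sketch, under assumed settings (ridge regression over a grid of penalties, sample sizes, and noise level chosen for illustration, not taken from the paper), showing that the minimum validation error systematically understates the test error of the selected model.

```python
# Illustrative sketch (not the paper's method): the same validation errors
# used to select a model understate that model's error on fresh data.
# All settings below are hypothetical choices for demonstration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)


def one_run(n_train=100, n_val=100, n_test=100_000, p=20, sigma=1.0):
    beta = rng.normal(size=p) / np.sqrt(p)

    def make_data(n):
        X = rng.normal(size=(n, p))
        y = X @ beta + sigma * rng.normal(size=n)
        return X, y

    X_tr, y_tr = make_data(n_train)   # training split
    X_va, y_va = make_data(n_val)     # validation split used for selection
    X_te, y_te = make_data(n_test)    # large test set approximates true test error

    alphas = np.logspace(-3, 3, 25)   # candidate models: ridge penalties
    val_errs, test_errs = [], []
    for a in alphas:
        model = Ridge(alpha=a).fit(X_tr, y_tr)
        val_errs.append(mean_squared_error(y_va, model.predict(X_va)))
        test_errs.append(mean_squared_error(y_te, model.predict(X_te)))

    k = int(np.argmin(val_errs))      # select the model with minimum validation error
    return val_errs[k], test_errs[k]


results = np.array([one_run() for _ in range(200)])
print(f"mean minimum validation error:      {results[:, 0].mean():.4f}")
print(f"mean test error of selected model:  {results[:, 1].mean():.4f}")
# The first number is systematically smaller than the second,
# which is the selection-induced downward bias the paper targets.
```

The paper's contribution, as described, is to correct this bias using only the validation errors already computed, without refitting any model; the sketch above only exhibits the problem, not the correction.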