Calibration for Stratified Classification Models

10/31/2017
by Chandler Zuo, et al.

In classification problems, sampling bias between the training and testing data can substantially degrade the ranking performance of classification scores. Such bias can be introduced unintentionally during data collection or intentionally by the modeling procedure, for example through under-sampling or weighting techniques applied to imbalanced data. When such sampling bias exists, using the raw classification score to rank observations in the testing data can lead to suboptimal results. In this paper, I investigate the optimal calibration strategy in general settings and develop a practical solution for the specific case where the sampling bias is introduced by stratified sampling. The optimal solution is derived analytically by optimizing the ROC curve. For practical data, I propose a ranking algorithm for general classification models with stratified data. Numerical experiments demonstrate that the proposed algorithm effectively addresses the stratified sampling bias issue. Interestingly, the proposed method also shows potential applicability in two other machine learning areas, unsupervised learning and model ensembling, which are left as future research topics.
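
The abstract does not include code, but the core idea of calibrating scores under stratified sampling bias can be illustrated with a minimal sketch. It assumes the bias takes a simple, known form: each stratum keeps all positives but retains only a known fraction of its negatives (a common rebalancing scheme), so the standard prior-correction formula for undersampling (Elkan, 2001) maps sample-level scores back to the population scale stratum by stratum. The function and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def calibrate_stratified_scores(scores, strata, neg_sampling_rate):
    """Map biased-sample scores back to the population scale, per stratum.

    Assumes stratum s kept all positives but only a fraction
    neg_sampling_rate[s] of its negatives. Applies the standard
    prior-correction formula for undersampling:
        p_pop = beta * p / (beta * p + 1 - p),  beta = neg_sampling_rate[s].
    """
    scores = np.asarray(scores, dtype=float)
    calibrated = np.empty_like(scores)
    for s, beta in neg_sampling_rate.items():
        mask = strata == s
        p = scores[mask]
        calibrated[mask] = beta * p / (beta * p + 1.0 - p)
    return calibrated

# Toy usage: two strata whose negatives were undersampled at different rates.
rng = np.random.default_rng(0)
scores = rng.uniform(size=6)
strata = np.array(["A", "A", "A", "B", "B", "B"])
print(calibrate_stratified_scores(scores, strata, {"A": 0.2, "B": 0.5}))
```

Note that within a single stratum this correction is monotone and so leaves the ranking unchanged; it only affects comparisons across strata with different sampling rates, which is precisely the setting the paper targets.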
