Proper likelihood ratio based ROC curves for general binary classification problems

09/03/2018
by M. Gasparini, et al.

It is commonly stated that ROC curves, a standard tool in binary classification problems, should be optimal, and in particular concave, non-decreasing, and above the 45-degree line. Yet the ROC curves in actual use, theoretical and especially empirical, often fail these requirements. This work is an attempt to resolve that inconsistency. Optimality stems from the Neyman-Pearson lemma, which prescribes using likelihood-ratio-based ROC curves. Starting from there, we give the most general definition of a likelihood-ratio-based classification procedure, one that encompasses finite, continuous, and even more complex data types. We point out a strict relationship with a general notion of concentration of two probability measures. We give some nontrivial examples of situations with non-monotone and non-continuous likelihood ratios. Finally, we propose the ROC curve of a likelihood-ratio-based Gaussian kernel flexible Bayes classifier as a proper default alternative to the usual empirical ROC curve.
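To make the Neyman-Pearson prescription concrete: score each observation x by the likelihood ratio lr(x) = f1(x)/f0(x) of the class-conditional densities, then trace the ROC curve by sweeping a threshold t, plotting the false-positive rate P0(lr(X) > t) against the true-positive rate P1(lr(X) > t). The sketch below is illustrative, not the authors' implementation: it assumes SciPy and scikit-learn are available, uses synthetic Gaussian samples with made-up means and variances, and estimates f0 and f1 with Gaussian kernel density estimates in the spirit of the flexible Bayes classifier mentioned in the abstract. Because the two classes here have unequal variances, the likelihood ratio is non-monotone in x, so thresholding the raw feature can produce a non-concave ROC while the likelihood-ratio scores cannot.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical two-class sample (illustrative parameters, not from the paper).
# Unequal variances make the likelihood ratio non-monotone in x.
x0 = rng.normal(0.0, 1.0, size=500)   # class 0 (negatives)
x1 = rng.normal(1.0, 1.5, size=500)   # class 1 (positives)

# Gaussian kernel density estimates of the class-conditional densities
# f0 and f1; the ratio f1/f0 is the Neyman-Pearson-optimal scoring function.
f0 = gaussian_kde(x0)
f1 = gaussian_kde(x1)

def lr(x):
    eps = 1e-12  # guard against division by near-zero density
    return f1(x) / (f0(x) + eps)

# ROC of the likelihood-ratio scores vs. ROC of the raw feature.
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])
x = np.concatenate([x0, x1])
fpr_lr, tpr_lr, _ = roc_curve(y, lr(x))
fpr_raw, tpr_raw, _ = roc_curve(y, x)

print(f"AUC, raw feature:      {roc_auc_score(y, x):.3f}")
print(f"AUC, likelihood ratio: {roc_auc_score(y, lr(x)):.3f}")
```

Note that evaluating the likelihood-ratio ROC on the same data used to fit the density estimates is optimistic; in practice one would use held-out data or cross-validation.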
