Crowd disagreement about medical images is informative

06/21/2018
by Veronika Cheplygina, et al.

Classifiers for medical image analysis are often trained with a single consensus label, obtained by combining the labels from experts or crowds. However, disagreement between annotators may be informative, so removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean of the annotations, which reflects consensus, with the standard deviation and other distribution moments, which reflect disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at <https://figshare.com/s/5cbbce14647b66286544>.
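To make the comparison concrete, here is a minimal sketch (not the authors' code) of how consensus and disagreement features could be derived from crowd annotations and evaluated with a standard classifier. The file names, column names ("lesion_id", "asymmetry", "border", "color", "melanoma"), and the choice of logistic regression are assumptions for illustration only.

```python
# Sketch: compare consensus vs. disagreement features from crowd annotations
# for melanoma classification. All file and column names are hypothetical.
import pandas as pd
from scipy.stats import skew
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# One row per (lesion, annotator), with scores for visual characteristics.
annotations = pd.read_csv("crowd_annotations.csv")          # hypothetical file
labels = pd.read_csv("lesion_labels.csv").set_index("lesion_id")["melanoma"]

visual_features = ["asymmetry", "border", "color"]           # hypothetical columns

# Consensus features: per-lesion mean of each annotated characteristic.
consensus = annotations.groupby("lesion_id")[visual_features].mean()

# Disagreement features: per-lesion standard deviation and skewness.
disagreement = annotations.groupby("lesion_id")[visual_features].agg(["std", skew])
disagreement.columns = ["_".join(map(str, c)) for c in disagreement.columns]

y = labels.loc[consensus.index]

for name, X in [("consensus (mean)", consensus),
                ("disagreement (std, skew)", disagreement.fillna(0))]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```

Comparing the cross-validated AUC of the two feature sets mirrors the paper's comparison of consensus versus disagreement measures, though the actual features, aggregation choices, and classifier in the paper may differ.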
