In Quest of Ground Truth: Learning Confident Models and Estimating Uncertainty in the Presence of Annotator Noise

01/02/2023
by   Asma Ahmed Hashmi, et al.

The performance of Deep Learning (DL) models depends on the quality of labels. In some areas, the involvement of human annotators may introduce noise into the data. When these corrupted labels are blindly regarded as the ground truth (GT), DL models suffer from degraded performance. This paper presents a method that aims to learn a confident model in the presence of noisy labels, in conjunction with estimating the uncertainty of multiple annotators. We robustly estimate the predictions given only the noisy labels by adding an entropy- or information-based regularizer to the classifier network. We conduct our experiments on noisy versions of the MNIST, CIFAR-10, and FMNIST datasets. Our empirical results demonstrate the robustness of our method, which outperforms or performs comparably to other state-of-the-art (SOTA) methods. In addition, we evaluate the proposed method on a curated dataset in which the noise type and level of each annotator depend on the input image style. We show that our approach performs well and is adept at learning annotators' confusion. Moreover, we demonstrate that our model is more confident in predicting the GT than other baselines. Finally, we assess our approach on a segmentation problem and showcase its effectiveness with experiments.
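The abstract describes combining a per-annotator confusion model with an entropy-based regularizer on the classifier. The exact formulation is in the full paper; the sketch below is only an illustration of the general idea, under assumed details: each annotator is modeled by a row-stochastic confusion matrix `C[true][observed]`, the classifier's clean-label distribution is mixed through it to score the observed noisy label, and an entropy penalty with a hypothetical weight `lam` pushes the classifier toward confident predictions.

```python
import math

def annotator_loss(clean_probs, confusion, noisy_label, lam=0.01):
    """Illustrative sketch (not the paper's exact loss).

    clean_probs -- classifier's distribution over true classes, p(y|x)
    confusion   -- assumed row-stochastic matrix C[true][observed]
                   for one annotator
    noisy_label -- the label this annotator actually gave
    lam         -- assumed regularization strength
    """
    k = len(clean_probs)
    # Distribution over the annotator's observed labels:
    # p(noisy = j | x) = sum_y C[y][j] * p(y | x)
    noisy_probs = [sum(confusion[y][j] * clean_probs[y] for y in range(k))
                   for j in range(k)]
    # Negative log-likelihood of the observed noisy label
    nll = -math.log(noisy_probs[noisy_label] + 1e-12)
    # Entropy of the clean prediction; penalizing it favors
    # confident (low-entropy) classifier outputs
    entropy = -sum(p * math.log(p + 1e-12) for p in clean_probs)
    return nll + lam * entropy
```

For example, with `clean_probs = [0.7, 0.2, 0.1]` and a near-diagonal confusion matrix, the loss is small when the noisy label agrees with the confident class and grows when it does not; training would minimize this summed over annotators and examples.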

