Handling Noisy Labels via One-Step Abductive Multi-Target Learning

11/25/2020
by Yongquan Yang, et al.

Learning from noisy labels is an important concern because accurate ground-truth labels are unavailable in many real-world scenarios. In practice, approaches to this problem typically first correct potentially noisy-labeled instances and then update the predictive model with the corrected information. However, in specific areas such as medical histopathology whole-slide image analysis (MHWSIA), it is often difficult or even impossible for experts to manually produce noise-free ground-truth labels, which leads to labels with heavy noise. This situation raises two harder problems: 1) methods that correct potentially noisy-labeled instances are limited by the heavy noise in the labels; and 2) the appropriate evaluation strategy for validation/testing is unclear because noise-free ground-truth labels are very difficult to collect. In this paper, we focus on alleviating these two problems. For problem 1), we present a one-step abductive multi-target learning framework (OSAMTLF) that imposes one-step logical reasoning upon machine learning via a multi-target learning procedure, abducing the predictions of the learning model to be consistent with our prior knowledge. For problem 2), we propose a logical assessment formula (LAF) that evaluates the logical rationality of an approach's outputs by estimating the consistency between the predictions of the learning model and the logical facts narrated from the results of OSAMTLF's one-step logical reasoning. Applying OSAMTLF and LAF to the Helicobacter pylori (H. pylori) segmentation task in MHWSIA, we show that OSAMTLF abduces the machine learning model toward logically more rational predictions, which is beyond the capability of various state-of-the-art approaches for learning from noisy labels.

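As a rough illustration of the multi-target learning step described in the abstract, the Python sketch below combines several abduced targets into a single training loss for a segmentation model. The function name, the PyTorch dependency, and the weighting scheme are our own assumptions for illustration; this is a minimal sketch, not the authors' implementation.

# A minimal sketch of multi-target learning in the spirit of OSAMTLF.
# Assumption: one-step logical reasoning has already turned the noisy
# labels plus prior knowledge into several "abduced" target masks per
# image; names, shapes and weights here are illustrative only.
import torch
import torch.nn.functional as F

def multi_target_loss(logits, abduced_targets, weights=None):
    # logits:          (N, C, H, W) raw segmentation scores from the model
    # abduced_targets: list of (N, H, W) integer masks produced by reasoning
    # weights:         optional per-target weights (uniform by default)
    if weights is None:
        weights = [1.0 / len(abduced_targets)] * len(abduced_targets)
    total = logits.new_zeros(())
    for target, w in zip(abduced_targets, weights):
        # each abduced target pulls the predictions toward prior knowledge
        total = total + w * F.cross_entropy(logits, target)
    return total

# Hypothetical usage: loss = multi_target_loss(model(images), [t1, t2], [0.7, 0.3])

The intent, roughly, is that training against several abduced targets at once, rather than a single corrected label, keeps the model from committing to the heavy noise present in any individual target.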

