Accuracy versus time frontiers of semi-supervised and self-supervised learning on medical images

07/18/2023
by Zhe Huang et al.

For many applications of classifiers to medical images, a trustworthy label for each image can be difficult or expensive to obtain. In contrast, images without labels are more readily available. Two major research directions both promise that additional unlabeled data can improve classifier performance: self-supervised learning pretrains useful representations on unlabeled data only, then fine-tunes a classifier on these representations via the labeled set; semi-supervised learning directly trains a classifier on labeled and unlabeled data simultaneously. Recent methods from both directions have claimed significant gains on non-medical tasks, but do not systematically assess medical images and mostly compare only to methods in the same direction. This study contributes a carefully designed benchmark to help answer a practitioner's key question: given a small labeled dataset and a limited budget of hours to spend on training, what gains from additional unlabeled images are possible and which methods best achieve them? Unlike previous benchmarks, ours uses realistic-sized validation sets to select hyperparameters, assesses runtime-performance tradeoffs, and bridges two research fields. By comparing 6 semi-supervised methods and 5 self-supervised methods to strong labeled-only baselines on 3 medical datasets with 30-1000 labels per class, we offer insights to resource-constrained, results-focused practitioners: MixMatch, SimCLR, and BYOL represent strong choices that were not surpassed by more recent methods. After much effort selecting hyperparameters on one dataset, we publish settings that enable strong methods to perform well on new medical tasks within a few hours, with further search over dozens of hours delivering modest additional gains.
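
The abstract contrasts the two training routes: self-supervised pretraining on unlabeled data followed by supervised fine-tuning, versus semi-supervised joint training on labeled and unlabeled data. The sketch below is not the paper's benchmark code; it is a minimal PyTorch illustration of that structural difference, using synthetic tensors as stand-in images, additive noise as a placeholder augmentation, a SimCLR-style NT-Xent loss for the self-supervised route, and confidence-thresholded pseudo-labeling (a FixMatch-style stand-in, simpler than MixMatch) for the semi-supervised route. All architectures and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch (not the paper's methods): the two paradigms from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_labeled, n_unlabeled, n_classes, dim = 64, 512, 3, 32
x_lab = torch.randn(n_labeled, dim)           # stand-in for labeled images
y_lab = torch.randint(0, n_classes, (n_labeled,))
x_unl = torch.randn(n_unlabeled, dim)         # stand-in for unlabeled images

def augment(x):                               # placeholder augmentation: noise
    return x + 0.1 * torch.randn_like(x)

def make_encoder():
    return nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 32))

# --- Self-supervised route: pretrain on unlabeled data only, then fine-tune ---
def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss between two views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))         # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

encoder = make_encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(100):                       # pretraining never sees labels
    x = x_unl[torch.randint(0, n_unlabeled, (64,))]
    loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
    opt.zero_grad(); loss.backward(); opt.step()

head = nn.Linear(32, n_classes)               # fine-tune a classifier on the labeled set
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for step in range(100):
    loss = F.cross_entropy(head(encoder(x_lab)), y_lab)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Semi-supervised route: one model, labeled + unlabeled losses jointly ---
model = nn.Sequential(make_encoder(), nn.Linear(32, n_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    sup_loss = F.cross_entropy(model(x_lab), y_lab)
    x = x_unl[torch.randint(0, n_unlabeled, (64,))]
    with torch.no_grad():                     # confident pseudo-labels on unlabeled batch
        conf, pseudo = model(x).softmax(dim=1).max(dim=1)
        keep = (conf > 0.9).float()
    unsup_loss = (F.cross_entropy(model(augment(x)), pseudo, reduction='none') * keep).mean()
    loss = sup_loss + 1.0 * unsup_loss        # unlabeled-loss weight is a tunable knob
    opt.zero_grad(); loss.backward(); opt.step()
```

The structural point of the benchmark question follows directly from this layout: the self-supervised route spends its compute budget in two phases (pretraining, then fine-tuning), while the semi-supervised route folds the unlabeled data into a single training loop, so runtime-versus-accuracy tradeoffs differ between the two families.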
