Revisiting Fine-Tuning Strategies for Self-supervised Medical Imaging Analysis

07/20/2023
by Muhammad Osama Khan, et al.

Despite the rapid progress in self-supervised learning (SSL), end-to-end fine-tuning still remains the dominant fine-tuning strategy for medical imaging analysis. However, it remains unclear whether this approach is truly optimal for effectively utilizing the pre-trained knowledge, especially considering the diverse categories of SSL that capture different types of features. In this paper, we first establish strong contrastive and restorative SSL baselines that outperform SOTA methods across four diverse downstream tasks. Building upon these strong baselines, we conduct an extensive fine-tuning analysis across multiple pre-training and fine-tuning datasets, as well as various fine-tuning dataset sizes. Contrary to the conventional wisdom of fine-tuning only the last few layers of a pre-trained network, we show that fine-tuning intermediate layers is more effective, with fine-tuning the second quarter (25-50%) of the network being optimal for contrastive SSL, whereas fine-tuning the third quarter (50-75%) of the network is optimal for restorative SSL. Compared to the de-facto standard of end-to-end fine-tuning, our best fine-tuning strategy, which fine-tunes a shallower network consisting of the first three quarters (0-75%) of the pre-trained network, yields substantial improvements. Additionally, using these insights, we propose a simple yet effective method to leverage the complementary strengths of multiple SSL models, resulting in enhancements of up to 3.57%. Hence, our fine-tuning strategies not only enhance the performance of individual SSL models, but also enable effective utilization of the complementary strengths offered by multiple SSL models, leading to significant improvements in self-supervised medical imaging analysis.
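To make the quarter-wise strategy concrete, here is a minimal PyTorch sketch of freezing all parameters except those in a chosen fraction of the network's depth. The helper name `set_trainable_quarter`, the ResNet-50 backbone, and the use of top-level child modules as a depth proxy are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torchvision

def set_trainable_quarter(model, start_frac, end_frac):
    """Freeze everything, then unfreeze only the modules whose position
    in the network's top-level child list falls in [start_frac, end_frac).
    Using child modules as a depth proxy is a simplifying assumption."""
    blocks = list(model.children())
    n = len(blocks)
    lo, hi = int(n * start_frac), int(n * end_frac)
    for i, block in enumerate(blocks):
        trainable = lo <= i < hi
        for p in block.parameters():
            p.requires_grad = trainable

# Illustrative backbone; in practice, load SSL pre-trained weights here.
encoder = torchvision.models.resnet50(weights=None)

# Contrastive SSL: fine-tune only the second quarter (25-50%) of layers.
# For restorative SSL, the paper's finding suggests (0.50, 0.75) instead.
set_trainable_quarter(encoder, 0.25, 0.50)

# Optimize only the unfrozen parameters.
optimizer = torch.optim.AdamW(
    (p for p in encoder.parameters() if p.requires_grad), lr=1e-4
)
```

The same mechanism covers the shallower-network variant: truncate the backbone to its first three quarters (0-75%) and attach the task head to those intermediate features before fine-tuning.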
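The abstract does not spell out how the complementary strengths of multiple SSL models are combined. One minimal realization, shown below purely as an assumption, is to fine-tune each model (e.g., one contrastive, one restorative) with its optimal quarter-wise strategy and average their class probabilities at inference; `ensemble_predict` is a hypothetical helper, not the paper's stated method.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, images):
    """Average softmax probabilities across independently fine-tuned
    SSL models. Assumes each model maps an image batch to class logits."""
    probs = [m(images).softmax(dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)
```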
