Advancing 3D Medical Image Analysis with Variable Dimension Transform based Supervised 3D Pre-training

01/05/2022
by   Shu Zhang, et al.

The difficulties in both data acquisition and annotation substantially restrict the sample sizes of training datasets for 3D medical imaging applications. As a result, constructing high-performance 3D convolutional neural networks from scratch remains difficult without sufficient pre-trained parameters. Previous efforts at 3D pre-training have frequently relied on self-supervised approaches, which apply predictive or contrastive learning to unlabeled data to build invariant 3D representations. However, because large-scale supervision information is unavailable, obtaining semantically invariant and discriminative representations from these learning frameworks remains problematic. In this paper, we revisit an innovative yet simple fully supervised 3D network pre-training framework that takes advantage of semantic supervision from large-scale 2D natural image datasets. With a redesigned 3D network architecture, reformulated natural images are used to address the problem of data scarcity and develop powerful 3D representations. Comprehensive experiments on four benchmark datasets demonstrate that the proposed pre-trained models can effectively accelerate convergence while also improving accuracy for a variety of 3D medical imaging tasks, such as classification, segmentation, and detection. In addition, compared to training from scratch, it can save up to 60%. On the NIH DeepLesion dataset, it likewise achieves state-of-the-art detection performance, outperforming earlier self-supervised and fully supervised pre-training approaches, as well as methods trained from scratch. To facilitate further development of 3D medical models, our code and pre-trained model weights are publicly available at https://github.com/urmagicsmine/CSPR.
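The abstract describes reformulating 2D natural images so they can pre-train a 3D network, but does not spell out the transform itself. As a minimal sketch of the general idea, the snippet below lifts a 2D image into a pseudo-3D volume by replicating it along a new depth axis; the function name `lift_2d_to_3d` and the simple repeat strategy are illustrative assumptions, not the paper's actual variable dimension transform.

```python
import numpy as np

def lift_2d_to_3d(image_2d: np.ndarray, depth: int = 16) -> np.ndarray:
    """Replicate a 2D image (H, W, C) along a new leading depth axis,
    producing a (depth, H, W, C) volume that a 3D CNN could consume.

    NOTE: hypothetical sketch only; the paper's variable dimension
    transform is not specified in this abstract.
    """
    return np.repeat(image_2d[np.newaxis, ...], depth, axis=0)

# Example: a random "natural image" lifted to a 16-slice volume.
img = np.random.rand(224, 224, 3)
vol = lift_2d_to_3d(img, depth=16)
print(vol.shape)  # (16, 224, 224, 3)
```

In practice, a pre-training pipeline would batch such volumes and feed them to a 3D backbone in place of scarce medical scans; the simple repetition here merely shows how 2D supervision signals can be reused at 3D input shape.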


Related research

- A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis (08/12/2021)
- Enhancing Network Initialization for Medical AI Models Using Large-Scale, Unlabeled Natural Images (08/15/2023)
- MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation (03/25/2023)
- Reducing Annotation Need in Self-Explanatory Models for Lung Nodule Diagnosis (06/27/2022)
- vox2vec: A Framework for Self-supervised Contrastive Learning of Voxel-level Representations in Medical Images (07/27/2023)
- Surface Masked AutoEncoder: Self-Supervision for Cortical Imaging Data (08/10/2023)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation (11/16/2022)
