Universal Segmentation of 33 Anatomies

by Pengbo Liu, et al.
Institute of Computing Technology, Chinese Academy of Sciences

In this paper, we present an approach for learning a single model that universally segments 33 anatomical structures, including vertebrae, pelvic bones, and abdominal organs. Building such a model requires addressing the following challenges. First, while it would be ideal to learn the model from a large-scale, fully annotated dataset, curating such a dataset is practically infeasible. We therefore resort to learning from a union of multiple datasets, each containing partially labeled images. Second, in line with partial labeling, we contribute CTSpine1K, an open-source, large-scale vertebra segmentation dataset for the benefit of the spine analysis community, boasting over 1,000 3D volumes and over 11K annotated vertebrae. Third, in 3D medical image segmentation, GPU memory limits force models to be trained on cropped patches instead of whole 3D volumes, which restricts the amount of contextual information that can be learned. To this end, we propose a cross-patch transformer module that fuses information from adjacent patches, enlarging the aggregated receptive field for improved segmentation performance. This is especially important for segmenting elongated structures such as the spine. Based on 7 partially labeled datasets that collectively contain about 2,800 3D volumes, we successfully learn such a universal model. Finally, we evaluate the universal model on multiple open-source datasets, demonstrating that it generalizes well and can potentially serve as a solid foundation for downstream tasks.
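A common way to train one model on a union of partially labeled datasets is to compute the loss only over the classes each dataset actually annotates, folding the softmax probability of all unannotated classes into the background. The sketch below illustrates this idea with NumPy; it is a minimal, hypothetical illustration of the general strategy (function name, label-space conventions, and the marginalization choice are assumptions), not the paper's actual loss.

```python
import numpy as np

def masked_partial_label_loss(logits, target, labeled_classes):
    """Cross-entropy restricted to the classes annotated in the source dataset.

    logits          : (C, N) raw scores for C universal classes over N voxels.
    target          : (N,) integer labels in the *reduced* label space
                      [0 .. len(labeled_classes)], where 0 is background.
    labeled_classes : indices (into the C universal classes) that this
                      dataset actually annotates; class 0 is background.

    Hypothetical sketch: unannotated classes are marginalized into the
    background probability before the log-likelihood is taken.
    """
    # Numerically stable softmax over the universal class axis.
    probs = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs /= probs.sum(axis=0, keepdims=True)

    keep = [0] + list(labeled_classes)
    unlabeled = [c for c in range(probs.shape[0]) if c not in keep]

    # Fold probability mass of unannotated classes into background.
    merged = probs[keep].copy()
    if unlabeled:
        merged[0] += probs[unlabeled].sum(axis=0)

    # Negative log-likelihood of the target in the reduced label space.
    voxel_idx = np.arange(target.shape[0])
    return float(-np.log(merged[target, voxel_idx] + 1e-12).mean())
```

A dataset that labels only, say, the liver would pass `labeled_classes=[liver_idx]`; voxels belonging to any other organ are then treated as background for that dataset, so the model is never penalized for predicting classes the dataset does not annotate.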


DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets

Due to the intensive cost of labor and expertise in annotating 3D medica...

MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset

Pretraining with large-scale 3D volumes has a potential for improving th...

Towards Robust Medical Image Segmentation on Small-Scale Data with Incomplete Labels

The data-driven nature of deep learning models for semantic segmentation...

HST-MRF: Heterogeneous Swin Transformer with Multi-Receptive Field for Medical Image Segmentation

The Transformer has been successfully used in medical image segmentation...

Learning from partially labeled data for multi-organ and tumor segmentation

Medical image benchmarks for the segmentation of organs and tumors suffe...

Incremental Learning for Multi-organ Segmentation with Partially Labeled Datasets

There exists a large number of datasets for organ segmentation, which ar...

Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation

Learning by imitation is one of the most significant abilities of human ...
