MMFNet: A Multi-modality MRI Fusion Network for Segmentation of Nasopharyngeal Carcinoma

12/25/2018
by   Huai Chen, et al.

Segmentation of nasopharyngeal carcinoma (NPC) from Magnetic Resonance Imaging (MRI) is a crucial procedure in radiotherapy planning, improving clinical outcomes and reducing radiation-associated toxicity. Manually marking the boundary of NPC slice by slice is time-consuming and labor-intensive work for radiologists. In addition, due to the complex anatomical structure of NPC, automatic algorithms based on single-modality MRI lack the capability to produce accurate delineations. To address the weak distinction between normal adjacent tissues and the lesion region in any single MRI modality, we propose a multi-modality MRI fusion network (MMFNet) that exploits three MRI modalities for NPC segmentation. The backbone is a multi-encoder network composed of several modality-specific encoders and a single decoder. Skip connections combine low-level features from the different MRI modalities with high-level features. Additionally, a fusion block is proposed to effectively fuse features from multi-modality MRI: it first highlights informative features and regions of interest, and the weighted features are then fused and further refined by a residual fusion block. Moreover, a training strategy named self-transfer is proposed to initialize the encoders of the multi-encoder network, encouraging each encoder to fully mine its specific MRI modality. The proposed framework can effectively exploit multi-modality medical datasets, and modules such as the fusion block and self-transfer generalize easily to other multi-modality tasks.
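The multi-encoder backbone and fusion block described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the layer widths, the squeeze-and-excitation-style attention used for feature highlighting, and the class names (`FusionBlock`, `MultiEncoderNet`) are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Hypothetical sketch of the paper's fusion block: channel attention
    highlights informative features per modality, the weighted features
    are summed, then refined by a residual block."""
    def __init__(self, channels):
        super().__init__()
        # Channel-attention weighting (assumption: SE-style gating).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels, 1),
            nn.Sigmoid(),
        )
        # Residual refinement applied after fusion.
        self.refine = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, feats):
        # feats: list of per-modality feature maps, each (N, C, D, H, W).
        weighted = [f * self.attn(f) for f in feats]
        fused = torch.stack(weighted, dim=0).sum(dim=0)
        return fused + self.refine(fused)

class MultiEncoderNet(nn.Module):
    """Toy multi-encoder backbone: one modality-specific encoder per
    MRI sequence, a shared fusion block, and a single decoder head."""
    def __init__(self, n_modalities=3, width=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Conv3d(1, width, 3, padding=1) for _ in range(n_modalities)
        )
        self.fusion = FusionBlock(width)
        self.decoder = nn.Conv3d(width, 1, 1)  # binary NPC mask logits

    def forward(self, volumes):
        feats = [enc(v) for enc, v in zip(self.encoders, volumes)]
        return self.decoder(self.fusion(feats))

# Example with three modality volumes (e.g. T1, T2, contrast-enhanced T1).
net = MultiEncoderNet()
vols = [torch.randn(1, 1, 8, 16, 16) for _ in range(3)]
out = net(vols)
print(out.shape)  # torch.Size([1, 1, 8, 16, 16])
```

In a full model each encoder would be a deep 3D CNN with skip connections into the shared decoder; the sketch collapses them to single convolutions only to show the data flow from per-modality features through fusion to one segmentation head.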
