Evidence-aware multi-modal data fusion and its application to total knee replacement prediction

by Xinwen Liu, et al.

Deep neural networks have been widely studied for predicting medical conditions such as total knee replacement (TKR). It has been shown that data of different modalities, such as imaging data, clinical variables and demographic information, provide complementary information and can therefore jointly improve prediction accuracy. However, the data sources of the various modalities may not always be of high quality, and each modality may capture only partial information about the medical condition. Predictions from different modalities can therefore contradict each other, and the final prediction may fail in the presence of such a conflict. It is thus important to account for the reliability of each data source and its prediction output when making the final decision. In this paper, we propose an evidence-aware multi-modal data fusion framework based on Dempster-Shafer theory (DST). The backbone models contain an image branch, a non-image branch and a fusion branch. Each branch has an evidence network that takes the extracted features as input and outputs an evidence score, designed to represent the reliability of that branch's output. The output probabilities, together with the evidence scores from the multiple branches, are combined using Dempster's combination rule to make the final prediction. Experimental results on the public Osteoarthritis Initiative (OAI) dataset for the TKR prediction task show the superiority of the proposed fusion strategy across various backbone models.
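To make the fusion step concrete, here is a minimal sketch of Dempster's combination rule for a two-class TKR frame. The abstract does not specify how evidence scores enter the mass functions, so the `branch_masses` helper below is an assumption: it discounts each branch's predicted probability by its evidence score and assigns the remaining mass to the full frame (ignorance), which is one common way to encode source reliability in DST. The function names, the two-branch setup and the example numbers are all illustrative, not the authors' implementation.

```python
from itertools import product


def combine(m1, m2):
    """Dempster's rule of combination over a shared frame of discernment.

    Masses are dicts mapping frozenset(hypotheses) -> belief mass.
    Mass falling on empty intersections is the conflict, which is
    normalized away.
    """
    fused = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are completely conflicting")
    return {h: m / (1.0 - conflict) for h, m in fused.items()}


def branch_masses(p_tkr, evidence):
    """Discount a branch's output probability by its evidence score.

    Mass not backed by evidence goes to the whole frame (ignorance),
    so an unreliable branch barely influences the fused result.
    (Hypothetical construction; the paper's exact mapping may differ.)
    """
    theta = frozenset({"TKR", "no_TKR"})
    return {
        frozenset({"TKR"}): evidence * p_tkr,
        frozenset({"no_TKR"}): evidence * (1.0 - p_tkr),
        theta: 1.0 - evidence,
    }


# Image branch is confident about TKR; the non-image branch weakly
# disagrees but carries a low evidence score, so it is down-weighted.
m_img = branch_masses(p_tkr=0.9, evidence=0.8)
m_clin = branch_masses(p_tkr=0.4, evidence=0.3)
fused = combine(m_img, m_clin)
```

In this example the low-evidence clinical branch cannot overturn the high-evidence image branch: the fused mass on "TKR" stays dominant, while genuine conflict between the two branches is absorbed by the normalization step rather than silently averaged away.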


