Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes

by Shruthi Chari, et al.

Medical experts may use Artificial Intelligence (AI) systems with greater trust if these are supported by contextual explanations that let the practitioner connect system inferences to their context of use. However, the importance of such explanations in improving model usage and understanding has not been extensively studied. Hence, we consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions. We explore how relevant information for such dimensions can be extracted from medical guidelines to answer typical questions from clinical practitioners. We frame this as a question answering (QA) task and employ several state-of-the-art large language models (LLMs) to present contexts around risk prediction model inferences and evaluate their acceptability. Finally, we study the benefits of contextual explanations by building an end-to-end AI pipeline including data cohorting, AI risk modeling, and post-hoc model explanations, and by prototyping a visual dashboard that presents the combined insights from different context dimensions and data sources while predicting and identifying the drivers of risk of Chronic Kidney Disease (CKD), a common type-2 diabetes comorbidity. All of these steps were performed in engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel. We show that LLMs, in particular BERT and SciBERT, can be readily deployed to extract some relevant explanations to support clinical usage. To understand the value added by the contextual explanations, the expert panel evaluated these for actionable insights in the relevant clinical setting. Overall, our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
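To make the QA framing concrete, the sketch below shows a deliberately simplified, purely lexical stand-in for the guideline QA step described above: given a practitioner's question and candidate guideline passages, it returns the passage with the highest token overlap. This is an illustration only, not the paper's method; the actual system uses BERT/SciBERT-based extractive QA, and the guideline snippets here are hypothetical placeholders.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; a crude stand-in for a BERT tokenizer."""
    return re.findall(r"[a-z0-9]+", text.lower())

def score_passage(question, passage):
    """Score a guideline passage by its token overlap with the question."""
    q = Counter(tokenize(question))
    p = Counter(tokenize(passage))
    return sum(min(q[t], p[t]) for t in q)

def answer(question, passages):
    """Return the passage that best matches the question (lexical retrieval)."""
    return max(passages, key=lambda p: score_passage(question, p))

# Hypothetical guideline snippets, for illustration only.
guideline_passages = [
    "Patients with type-2 diabetes should be screened annually for chronic "
    "kidney disease using eGFR and urine albumin.",
    "Lifestyle modification including diet and exercise is first-line therapy.",
]

question = ("How often should patients with type-2 diabetes be screened "
            "for chronic kidney disease?")
print(answer(question, guideline_passages))
```

A transformer-based QA model replaces the overlap score with learned span extraction, but the contract is the same: question in, supporting guideline context out, which is then surfaced alongside the risk prediction.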

