Enhancing the Interpretability of Deep Models in Healthcare Through Attention: Application to Glucose Forecasting for Diabetic People

09/08/2020
by   Maxime De Bois, et al.

The adoption of deep learning in healthcare is hindered by the "black box" nature of deep models. In this paper, we explore the RETAIN architecture for the task of glucose forecasting for diabetic people. Thanks to its two-level attention mechanism, the recurrent-neural-network-based RETAIN model is interpretable. We evaluate the RETAIN model on the type-2 IDIAB and the type-1 OhioT1DM datasets by comparing its statistical and clinical performances against those of two deep models and three models based on decision trees. We show that the RETAIN model offers a very good compromise between accuracy and interpretability, being almost as accurate as the LSTM and FCN models while remaining interpretable. We demonstrate the usefulness of its interpretable nature by analyzing the contribution of each variable to the final prediction. This analysis reveals that signal values older than one hour are not used by the RETAIN model for 30-minute-ahead glucose predictions. We also show how the RETAIN model changes its behavior upon the arrival of an event such as a carbohydrate intake or an insulin infusion; in particular, the patient's state before the event turns out to be particularly important for the prediction. Overall, thanks to its interpretability, the RETAIN model appears to be a very promising model for regression or classification tasks in healthcare.
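To make the two-level attention mechanism concrete, the following is a minimal PyTorch sketch of the RETAIN idea described above, not the authors' code: all names (RetainSketch, emb_dim, and so on) are illustrative, and a GRU backbone is assumed. One reverse-time RNN produces a scalar visit-level weight alpha_t per time step, a second produces a variable-level weight vector beta_t, and the prediction is made from the attention-weighted sum of the embedded inputs, which is what makes per-variable contributions readable.

    import torch
    import torch.nn as nn

    class RetainSketch(nn.Module):
        def __init__(self, n_vars, emb_dim=32, out_dim=1):
            super().__init__()
            self.embed = nn.Linear(n_vars, emb_dim, bias=False)  # v_t = W_emb x_t
            self.rnn_alpha = nn.GRU(emb_dim, emb_dim, batch_first=True)
            self.rnn_beta = nn.GRU(emb_dim, emb_dim, batch_first=True)
            self.w_alpha = nn.Linear(emb_dim, 1)       # scalar, visit-level attention
            self.w_beta = nn.Linear(emb_dim, emb_dim)  # vector, variable-level attention
            self.out = nn.Linear(emb_dim, out_dim)

        def forward(self, x):  # x: (batch, time, n_vars)
            v = self.embed(x)
            # RETAIN runs its attention RNNs in reverse time order
            g, _ = self.rnn_alpha(torch.flip(v, dims=[1]))
            h, _ = self.rnn_beta(torch.flip(v, dims=[1]))
            g, h = torch.flip(g, dims=[1]), torch.flip(h, dims=[1])
            alpha = torch.softmax(self.w_alpha(g), dim=1)  # (batch, time, 1)
            beta = torch.tanh(self.w_beta(h))              # (batch, time, emb_dim)
            context = (alpha * beta * v).sum(dim=1)        # weighted sum over time
            # returning alpha and beta exposes the attention for interpretation
            return self.out(context), alpha, beta

Because the output is linear in the weighted embeddings, the contribution of input variable k at time t can be read off in closed form as alpha_t * w_out^T (beta_t * W_emb[:, k]) * x_{t,k}; this is the kind of per-variable, per-time-step analysis applied above to the glucose, carbohydrate, and insulin signals.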
