BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis

by Hejie Cui, et al.

Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience. Graph neural networks (GNNs) are promising for modeling complicated network data, but they are prone to overfitting and suffer from poor interpretability, which prevents their use in decision-critical scenarios such as healthcare. To bridge this gap, we propose BrainNNExplainer, an interpretable GNN framework for brain network analysis. It is mainly composed of two jointly learned modules: a backbone prediction model specifically designed for brain networks and an explanation generator that highlights disease-specific prominent brain network connections. Extensive experimental results with visualizations on two challenging disease prediction datasets demonstrate the unique interpretability and strong performance of BrainNNExplainer.
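The two-module design described above can be illustrated with a minimal NumPy sketch. This is a hypothetical toy version, not the paper's actual implementation: the class names, layer sizes, and the sigmoid edge mask are illustrative assumptions. The explanation generator learns a soft mask over connections, and the backbone predicts a disease label from the masked brain network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ExplanationGenerator:
    """Learns a soft edge mask; high values mark prominent connections (assumed design)."""
    def __init__(self, n_nodes):
        self.edge_logits = rng.normal(size=(n_nodes, n_nodes))

    def mask(self):
        m = sigmoid(self.edge_logits)
        return (m + m.T) / 2.0  # symmetrize: brain networks are undirected

class BackbonePredictor:
    """Toy one-layer GNN: aggregate node features over the masked adjacency, then a linear readout."""
    def __init__(self, n_feats):
        self.w = rng.normal(size=(n_feats, 1)) * 0.1

    def predict(self, adj, feats):
        h = adj @ feats                # message passing over weighted edges
        g = h.mean(axis=0)             # mean-pool node embeddings to a graph embedding
        return sigmoid(g @ self.w)[0]  # disease probability in [0, 1]

# A random weighted brain network with 8 regions and 4 features per region.
n_nodes, n_feats = 8, 4
adj = rng.random((n_nodes, n_nodes))
adj = (adj + adj.T) / 2.0
feats = rng.normal(size=(n_nodes, n_feats))

explainer = ExplanationGenerator(n_nodes)
backbone = BackbonePredictor(n_feats)

# Joint usage: the explanation mask gates the adjacency seen by the backbone.
masked_adj = explainer.mask() * adj
prob = backbone.predict(masked_adj, feats)
print(0.0 <= prob <= 1.0)
```

In the actual framework both modules would be trained jointly, so gradients from the prediction loss also shape the edge mask; the sketch omits the training loop and shows only the forward pass.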


Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis

Human brains lie at the core of complex neurobiological systems, where t...

FBNETGEN: Task-aware GNN-based fMRI Analysis via Functional Brain Network Generation

Functional magnetic resonance imaging (fMRI) is one of the most common i...

Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis

Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) a...

Effective and Interpretable fMRI Analysis via Functional Brain Network Generation

Recent studies in neuroscience show great potential of functional brain ...

Towards better Interpretable and Generalizable AD detection using Collective Artificial Intelligence

Accurate diagnosis and prognosis of Alzheimer's disease are crucial for ...
