Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification

03/03/2023
by Alessandro Wollek, et al.

Purpose: To investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps, using pneumothorax classification as an example.

Materials and Methods: In this retrospective study, ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData. Saliency maps were generated using transformer multimodal explainability and gradient-weighted class activation mapping (GradCAM). Classification performance was evaluated on the Chest X-Ray 14, VinBigData, and SIIM-ACR data sets using the area under the receiver operating characteristic curve (AUC) and compared with convolutional neural networks (CNNs). The explainability methods were evaluated with positive/negative perturbation, sensitivity-n, effective heat ratio, intra-architecture repeatability, and inter-architecture reproducibility. In the user study, three radiologists classified 160 CXRs with and without saliency maps for pneumothorax and rated their usefulness.

Results: ViTs had CXR classification AUCs comparable with state-of-the-art CNNs: 0.95 (95% CI: 0.842) on Chest X-Ray 14, 0.84 (95% CI: 0.760, 0.895) on VinBigData, and 0.85 (95% CI: 0.868, 0.882) on SIIM-ACR. Both saliency map methods unveiled a strong bias toward pneumothorax tubes in the models. Radiologists found 47% of the attention-based saliency maps useful versus 39% for GradCAM, and the attention-based methods outperformed GradCAM on all metrics.

Conclusion: ViTs performed similarly to CNNs in CXR classification, and their attention-based saliency maps were more useful to radiologists and outperformed GradCAM.
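The abstract does not spell out how an attention-based saliency map is computed from a ViT. The paper uses transformer multimodal explainability; a closely related and simpler baseline, attention rollout, can be sketched in plain NumPy as below. The function name, input shapes, and the equal residual weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_rollout(attentions):
    """Attention rollout sketch: fuse per-layer attention maps into one
    saliency map by matrix-multiplying them across layers, modeling the
    transformer's residual connection as an added identity term.

    attentions: list of (tokens, tokens) arrays, one per layer, already
    averaged over heads; token 0 is assumed to be [CLS] (an assumption
    for illustration, not taken from the paper).
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for att in attentions:
        att = 0.5 * att + 0.5 * np.eye(n)          # account for residual path
        att = att / att.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = att @ rollout                      # accumulate across layers
    # Row 0 is the [CLS] token; its mass over the patch tokens
    # (columns 1..n-1) serves as the saliency map over image patches.
    return rollout[0, 1:]
```

The patch-level scores returned here would then be reshaped to the ViT's patch grid and upsampled to image resolution to overlay on the radiograph.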


research
12/22/2021

Comparing radiologists' gaze and saliency maps generated by interpretability methods for chest x-rays

The interpretability of medical image analysis models is considered a ke...
02/15/2022

Gaze-Guided Class Activation Mapping: Leveraging Human Attention for Network Attention in Chest X-rays Classification

The increased availability and accuracy of eye-gaze tracking technology ...
08/17/2022

Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs

Radiographs are a versatile diagnostic tool for the detection and assess...
06/09/2023

Exploring the Impact of Image Resolution on Chest X-ray Classification Performance

Deep learning models for image classification have often used a resoluti...
09/29/2020

Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach

Convolutional neural networks (CNNs) are commonly used for image classif...
03/08/2019

Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system

There is a growing interest in the automated analysis of chest X-Ray (CX...
06/09/2023

WindowNet: Learnable Windows for Chest X-ray Classification

Chest X-ray (CXR) images are commonly compressed to a lower resolution a...
