Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

by Vinitra Swamy et al.

Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, which enables us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs, each differing in one educationally relevant aspect, and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews with university-level educators, asking which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that, quantitatively, explainers significantly disagree with each other about what is important and, qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at
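
To make the comparison concrete, below is a minimal Python sketch (not the authors' released code) of how one might generate LIME and SHAP attributions for a single student and measure how far apart the two explanations are. The feature names, the random-forest stand-in model, the background-sample size, and the cosine distance metric are all illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.spatial.distance import cosine
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

# Toy stand-in for a student success dataset (feature names are hypothetical).
feature_names = ["video_time", "forum_posts", "quiz_attempts", "regularity"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
student = X[0]

# LIME: fit a local surrogate model around the instance, keep class-1 weights.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(student, model.predict_proba,
                                           num_features=len(feature_names))
lime_weights = np.zeros(len(feature_names))
for idx, weight in lime_exp.as_map()[1]:
    lime_weights[idx] = weight

# SHAP: KernelSHAP attributions for the probability of success (class 1),
# using a small background sample for tractability.
predict_success = lambda data: model.predict_proba(data)[:, 1]
shap_explainer = shap.KernelExplainer(predict_success, X[:50])
shap_weights = shap_explainer.shap_values(student)

# Compare the two explanations: cosine distance between the normalized
# absolute importance vectors (0 means the explanations point the same way).
norm = lambda w: np.abs(w) / np.abs(w).sum()
print("LIME :", dict(zip(feature_names, norm(lime_weights).round(3))))
print("SHAP :", dict(zip(feature_names, norm(shap_weights).round(3))))
print("cosine distance:", round(cosine(norm(lime_weights), norm(shap_weights)), 3))

The paper aggregates this kind of explanation distance over many students and across the five course pairs; the sketch above shows it for a single instance only.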

