Explaining Language Models' Predictions with High-Impact Concepts

05/03/2023
by Ruochen Zhao, et al.

The emergence of large-scale pretrained language models has posed unprecedented challenges in deriving explanations of why a model makes particular predictions. Stemming from the compositional nature of language, spurious correlations have further undermined the trustworthiness of NLP systems, leading to unreliable model explanations that are merely correlated with the output predictions. To encourage fairness and transparency, there is an urgent demand for reliable explanations that allow users to consistently understand the model's behavior. In this work, we propose a complete framework for extending concept-based interpretability methods to NLP. Specifically, we present a post-hoc interpretability method for extracting predictive high-level features (concepts) from the pretrained model's hidden-layer activations. We optimize for features whose existence causes the output predictions to change substantially, i.e., features that generate a high impact. Moreover, we devise several evaluation metrics that can be universally applied. Extensive experiments on real and synthetic tasks demonstrate that our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines.
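The abstract does not spell out the algorithm, but the core idea (discover concept directions in hidden activations, then score them by how much intervening on them changes the output) can be illustrated. Below is a minimal, hypothetical NumPy sketch, not the authors' implementation: it substitutes random toy data and a frozen linear head for the language model, uses PCA as a stand-in for the paper's impact-optimized concept discovery, and probes impact by projecting a concept direction out of the activations. All names, data, and the ablation probe are illustrative assumptions.

```python
import numpy as np

# Toy setup (hypothetical): H holds hidden-layer activations for n examples,
# and (W, b) is a frozen linear classification head on top of them.
rng = np.random.default_rng(0)
n, d, num_classes = 200, 64, 2
H = rng.normal(size=(n, d))            # stand-in for LM hidden states
W = rng.normal(size=(d, num_classes))
b = np.zeros(num_classes)

def predict_proba(h):
    """Softmax probabilities of the frozen head over activations h."""
    logits = h @ W + b
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

# Step 1: candidate concept directions. Here, simply the top principal
# components of the activations; the paper instead optimizes concepts
# directly for predictive impact.
H_centered = H - H.mean(axis=0)
_, _, Vt = np.linalg.svd(H_centered, full_matrices=False)
concepts = Vt[:10]                     # 10 unit-norm candidate directions

# Step 2: score each concept by a crude causal impact probe: project the
# concept out of the activations and measure how far the output
# distribution moves (mean L1 distance across examples).
def impact(c):
    H_ablated = H - (H @ c)[:, None] * c[None, :]  # remove concept component
    p_orig, p_ablt = predict_proba(H), predict_proba(H_ablated)
    return np.abs(p_orig - p_ablt).sum(axis=-1).mean()

scores = [impact(c) for c in concepts]
print("highest-impact concept:", int(np.argmax(scores)), "score:", max(scores))
```

In this sketch, a high score marks a direction whose removal substantially shifts predictions, which is the sense in which the abstract calls a concept "high-impact"; the paper's actual optimization, architecture hooks, and evaluation metrics differ.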


Related research

09/13/2022
Concept-Based Explanations for Tabular Data
The interpretability of machine learning models has been an essential ar...

12/19/2022
Explanation Regeneration via Information Bottleneck
Explaining the black-box predictions of NLP models naturally and accurat...

05/11/2023
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks
Transformer architectures are complex and their use in NLP, while it has...

05/27/2019
Analyzing the Interpretability Robustness of Self-Explaining Models
Recently, interpretable models called self-explaining models (SEMs) have...

06/18/2019
Model Explanations under Calibration
Explaining and interpreting the decisions of recommender systems are bec...

11/19/2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
This paper evaluates whether training a decision tree based on concepts ...

05/19/2023
CCGen: Explainable Complementary Concept Generation in E-Commerce
We propose and study Complementary Concept Generation (CCGen): given a c...
