Rationalization for Explainable NLP: A Survey

01/21/2023
by Sai Gurrapu et al.

Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful, but they are insufficient because they require specialized knowledge to interpret. These factors have led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, its literature is disorganized. This work is the first survey of the area, analyzing rationalization literature in NLP from 2007 to 2022. It presents the available methods, explainability evaluations, code, and datasets used across the NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
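The abstract contrasts numerical explanations such as LIME with free-text rationales. The sketch below illustrates that contrast on a toy sentiment classifier; the training data, the flan-t5-small checkpoint, and the prompt format are illustrative assumptions for this sketch, not methods taken from the survey.

```python
# Minimal sketch: numerical (LIME) explanation vs. a generated rationale.
# Assumes the lime, scikit-learn, and transformers packages are installed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment classifier standing in for a black-box NLP model.
train_texts = ["the film was wonderful", "a delightful experience",
               "utterly boring plot", "the acting was terrible"]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

text = "the plot was boring but the acting was wonderful"

# Numerical explanation: LIME assigns a weight to each token, which the
# reader must still know how to interpret.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(text, clf.predict_proba, num_features=5)
print(exp.as_list())  # e.g. [("wonderful", 0.31), ("boring", -0.27), ...]

# Rationalization: a text-to-text model produces a natural language
# justification (rationale) alongside its answer, readable by non-experts.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")
prompt = f"Classify the sentiment and explain why: {text}"
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```

The first output is a list of token weights that requires familiarity with feature attribution; the second is a sentence a non-technical user can read directly, which is the accessibility argument the survey makes for rationalization.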
