Human-in-the-loop model explanation via verbatim boundary identification in generated neighborhoods

by Xianlong Zeng, et al.

The black-box nature of machine learning models limits their use in critical applications, raising faithfulness and ethical concerns that can lead to a crisis of trust. One way to mitigate this issue is to understand how a (mispredicted) decision is carved out from the decision boundary. This paper presents a human-in-the-loop approach to explaining machine learning models using verbatim neighborhood manifestation. In contrast to most current eXplainable Artificial Intelligence (XAI) systems, which provide hit-or-miss approximate explanations, our approach generates the local decision boundary around the given instance and enables human intelligence to draw conclusions about the model's behavior. Our method comprises three stages: 1) a neighborhood generation stage, which generates instances based on the given sample; 2) a classification stage, which classifies the generated instances to carve out the local decision boundary and delineate the model's behavior; and 3) a human-in-the-loop stage, which allows humans to refine and explore the neighborhood of interest. In the generation stage, a generative model produces plausible synthetic neighbors around the given instance. After the classification stage, the classified neighbor instances provide a multifaceted view of the model's behavior. Three intervention points are provided in the human-in-the-loop stage, enabling humans to apply their own intelligence to interpret the model's behavior. Experiments on two datasets demonstrate the potential of the proposed approach for improving human understanding of complex machine learning models.
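The first two stages above (generate neighbors, then classify them to carve out the local boundary) can be sketched as follows. This is a minimal illustration only: the paper uses a trained generative model to produce plausible neighbors, whereas this sketch substitutes simple Gaussian perturbation, and the black-box classifier here is a stand-in logistic regression; all function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in black-box model (the approach works with any classifier).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
black_box = LogisticRegression().fit(X, y)

def generate_neighbors(instance, n=500, scale=0.3):
    """Stage 1 (simplified): sample synthetic neighbors around the instance.
    The paper uses a generative model; Gaussian perturbation is a stand-in."""
    return instance + rng.normal(scale=scale, size=(n, instance.shape[0]))

def carve_local_boundary(instance, model, n=500):
    """Stage 2: classify the neighbors; the label pattern delineates
    the model's local decision boundary around the instance."""
    neighbors = generate_neighbors(instance, n=n)
    labels = model.predict(neighbors)
    return neighbors, labels

instance = np.array([0.1, -0.05])  # a point near the decision boundary
neighbors, labels = carve_local_boundary(instance, black_box)

# Stage 3 hook: a human would inspect the labeled neighborhood and refine
# the region of interest; here we only summarize how the boundary splits it.
print("fraction predicted positive:", labels.mean())
```

In the actual human-in-the-loop stage, the labeled neighborhood would be visualized so that a person can adjust the generation region, re-sample, and inspect individual neighbors, rather than reading a single summary statistic.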


