Model-agnostic explainable artificial intelligence for object detection in image data

03/30/2023
by Milad Moradi, et al.

Object detection is a fundamental task in computer vision that has advanced substantially through the development of large and intricate deep learning models. However, the lack of transparency in these models is a major obstacle to their widespread adoption. Explainable artificial intelligence (XAI) is a field of research that develops methods to help users understand the behavior, decision logic, and vulnerabilities of AI-based systems. Black-box explanation refers to explaining the decisions of an AI system without access to its internals. In this paper, we design and implement a black-box explanation method named Black-box Object Detection Explanation by Masking (BODEM), which adopts a new masking approach for AI-based object detection systems. We propose local and distant masking to generate multiple versions of an input image. Local masks perturb pixels within a target object to probe how the object detector reacts to those changes, while distant masks perturb pixels outside the object to assess how they affect the detection model's decisions. A saliency map is then created by estimating the importance of pixels from the difference between the detection output before and after masking. Finally, a heatmap visualizes how important the pixels of the input image are to the detected objects. Experiments on several object detection datasets and models show that BODEM can effectively explain the behavior of object detectors and reveal their vulnerabilities, making it suitable for explaining and validating AI-based object detection systems in black-box software testing scenarios. Furthermore, data augmentation experiments show that the local masks produced by BODEM can be used to further train object detectors, improving their detection accuracy and robustness.
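To make the mask-and-measure idea concrete, the sketch below implements a generic occlusion-based saliency estimator in the same spirit. It is an illustrative approximation, not the authors' implementation: the sliding square mask, the `detect` callback (assumed to return the detector's confidence for the target object, e.g., the score of the best-IoU-matched box), and the `mask_size` and `stride` parameters are all assumptions; BODEM's precise local and distant masking strategy is defined in the full paper.

```python
import numpy as np

def occlusion_saliency(image, detect, mask_size=16, stride=16, mask_value=0):
    """Occlusion-based saliency for one detected object (hypothetical sketch).

    `detect` is a black-box callback: given an image, it returns the
    confidence the detector assigns to the target object. Mask squares
    that fall inside the object's box play the role of BODEM's local
    masks; squares outside it act like distant masks probing context.
    """
    h, w = image.shape[:2]
    base_score = detect(image)                 # confidence before masking
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = mask_value
            # Importance of this region = confidence drop when it is hidden.
            drop = base_score - detect(masked)
            saliency[y:y + mask_size, x:x + mask_size] += drop
            counts[y:y + mask_size, x:x + mask_size] += 1

    return saliency / np.maximum(counts, 1)    # average overlapping masks
```

Normalizing the returned map to [0, 1] and overlaying it on the input image yields the kind of heatmap the abstract describes; restricting the mask grid to the object's bounding box, or to its complement, corresponds to applying only local or only distant masks.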


