Improving Transparency of Deep Neural Inference Process

03/13/2019
by Hiroshi Kuwajima, et al.

Deep learning techniques have advanced rapidly in recent years and are becoming a necessary component of a wide range of systems. However, the inference process of deep learning models is a black box, which makes them poorly suited to safety-critical systems that must exhibit high transparency. In this paper, to address this black-box limitation, we develop a simple analysis method consisting of 1) structural feature analysis: listing the features that contribute to the inference process; 2) linguistic feature analysis: listing the natural-language labels that describe the visual attributes of each contributing feature; and 3) consistency analysis: measuring the consistency among the input data, the inference (label), and the results of our structural and linguistic feature analyses. Our analysis is kept simple so that it reflects the actual inference process, giving high transparency, and it avoids additional black-box mechanisms such as an LSTM, keeping the results highly human-readable. We conduct experiments, discuss the results of our analysis qualitatively and quantitatively, and conclude that our work improves the transparency of neural networks. Evaluated through 12,800 human tasks, the results of our feature analysis are judged consistent with the input data in 75% of cases, and consistent with the inference (label) in 70% of cases. Beyond evaluating the proposed analysis itself, we find that it also provides suggestions, or possible next actions, such as expanding the neural network's capacity or collecting more training data to improve the network.
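The abstract does not spell out how the contributing features are scored. As an illustration only, here is a minimal, hypothetical sketch of a structural feature analysis in the spirit described above, ranking the channels of one convolutional layer by a gradient-times-activation score. The model choice (resnet18), the inspected layer (layer4), and the scoring rule are assumptions for illustration, not the paper's actual method.

```python
import torch
import torchvision.models as models

# Hypothetical sketch of a "structural feature analysis": rank the channels
# of one convolutional layer by a gradient-times-activation score, a common
# proxy for how much each feature contributes to the predicted label.
# NOTE: the paper's actual scoring rule and layer choice may differ.

model = models.resnet18(weights=None)  # pretrained weights would be used in practice
model.eval()

store = {}

def fwd_hook(module, inp, out):
    store["act"] = out  # feature maps of the inspected layer
    out.register_hook(lambda g: store.__setitem__("grad", g))

handle = model.layer4.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
logits = model(x)
label = logits.argmax(dim=1).item()
logits[0, label].backward()  # gradient of the predicted class score

# Per-channel contribution score: spatial mean of gradient * activation.
score = (store["grad"] * store["act"]).detach().mean(dim=(2, 3)).squeeze(0)
top = torch.topk(score, k=5)
print("top contributing channels:", top.indices.tolist())

# Each listed channel could then be given a natural-language attribute label
# (linguistic feature analysis) and checked against the input and the label
# (consistency analysis).
handle.remove()
```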


