Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning

by Hazem Fahmy et al.

Deep neural networks (DNNs) are increasingly critical in modern safety-critical systems, for example in their perception layer to analyze images. Unfortunately, there is a lack of methods to ensure the functional safety of DNN-based components. The machine learning literature suggests that one should trust DNNs demonstrating high accuracy on test sets and that, in case of low accuracy, DNNs should be retrained using additional inputs similar to the error-inducing ones. We observe two major challenges with applying these practices to safety-critical systems: (1) scenarios underrepresented in the test set may represent serious risks that lead to safety violations yet go unnoticed; (2) debugging DNNs is poorly supported when the causes of errors are difficult to detect visually. To address these problems, we propose HUDD, an approach that automatically supports the identification of root causes of DNN errors. HUDD identifies root causes by applying a clustering algorithm to matrices (i.e., heatmaps) capturing the relevance of every DNN neuron to the DNN outcome, thereby grouping error-inducing images whose errors are due to common subsets of DNN neurons. HUDD also retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters. We have evaluated HUDD with DNNs from the automotive domain. The approach automatically identified all the distinct root causes of DNN errors, thus supporting safety analysis. Further, our retraining approach proved more effective at improving DNN accuracy than existing approaches.
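The pipeline described above (cluster heatmaps of error-inducing images, then rank candidate retraining images by relatedness to the clusters) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the number of clusters, and the centroid-distance relatedness measure are all assumptions made for the example.

```python
# Illustrative sketch of heatmap-based clustering for root-cause analysis.
# Assumptions (not from the paper): random toy heatmaps, 3 clusters,
# Euclidean distance to cluster centroids as the "relatedness" measure.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Toy heatmaps: 12 error-inducing images, one relevance score per neuron
# (8 neurons here) for each image.
heatmaps = rng.random((12, 8))

# Group error-inducing images whose errors involve similar subsets of
# relevant neurons.
labels = AgglomerativeClustering(n_clusters=3).fit(heatmaps).labels_

# Centroid heatmap of each root-cause cluster.
centroids = np.stack([heatmaps[labels == k].mean(axis=0) for k in range(3)])

# Rank unlabeled candidate retraining images by relatedness to the clusters
# (smaller distance to the nearest centroid means more related).
candidates = rng.random((20, 8))
dists = np.linalg.norm(candidates[:, None, :] - centroids[None, :, :], axis=2)
relatedness = dists.min(axis=1)
selected = np.argsort(relatedness)[:5]  # five most related candidates
```

In this sketch, the `selected` indices would point to the candidate images to label and add to the retraining set, so that retraining concentrates on the failure scenarios the clusters represent.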


Related research

HUDD: A tool to debug DNNs for safety analysis

Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering

Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems

DeepLocalize: Fault Localization for Deep Neural Networks

TEASMA: A Practical Approach for the Test Assessment of Deep Neural Networks using Mutation Analysis

DNN Explanation for Safety Analysis: an Empirical Evaluation of Clustering-based Approaches

A Framework for Assurance of Medication Safety using Machine Learning
