Hierarchical Distribution-Aware Testing of Deep Learning

05/17/2022
by Wei Huang, et al.

With its growing use in safety/security-critical applications, Deep Learning (DL) has raised increasing concerns regarding its dependability. In particular, DL has a notorious problem of lacking robustness. Despite recent efforts to detect Adversarial Examples (AEs) with state-of-the-art attack and testing methods, these methods are normally input-distribution agnostic and/or disregard the perceptual quality of AEs. Consequently, the detected AEs are either irrelevant to the application context or so unnatural/unrealistic that they can be easily noticed by humans. This may limit their effect on improving the DL model's dependability, as the testing budget is likely to be wasted on detecting AEs that are rarely encountered in the model's real-life operation. In this paper, we propose a new robustness testing approach for detecting AEs that considers both the input distribution and the perceptual quality of inputs. The two considerations are encoded by a novel hierarchical mechanism. First, at the feature level, the input data distribution is extracted and approximated by data compression techniques and probability density estimators. This quantified feature-level distribution, together with indicators that are highly correlated with local robustness, guides the selection of test seeds. Given a test seed, we then develop a two-step genetic algorithm for local test case generation at the pixel level, in which two fitness functions work alternately to control the quality of the detected AEs. Finally, extensive experiments confirm that our holistic approach, which considers hierarchical distributions at the feature and pixel levels, is superior to state-of-the-art methods that either disregard any input distribution or consider only a single (non-hierarchical) distribution, in terms of both the quality of the detected AEs and the improvement of the overall robustness of the DL model under testing.
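To make the feature-level step concrete, the sketch below shows one plausible way to score and select test seeds: a hypothetical pre-trained encoder (`encode`) compresses inputs into latent features, a Gaussian kernel density estimator approximates the distribution over those features, and seeds are ranked by combining density with a local-robustness indicator. The encoder, the `local_robustness` indicator, and the ranking formula are illustrative assumptions, not the paper's exact components.

```python
# Minimal sketch of feature-level seed selection, assuming a pre-trained
# encoder `encode(x)` that compresses an input to a latent vector, and a
# `local_robustness(x)` indicator where larger values mean more fragile.
# Both are hypothetical placeholders for the paper's actual components.
import numpy as np
from sklearn.neighbors import KernelDensity

def select_seeds(candidates, encode, local_robustness, k=10):
    """Rank candidate seeds by (feature-level density x local fragility)."""
    latents = np.stack([encode(x) for x in candidates])   # compress to features
    kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(latents)
    log_density = kde.score_samples(latents)              # log p(z) per seed
    fragility = np.array([local_robustness(x) for x in candidates])
    # Combine in log space: prefer seeds that are both likely under the
    # approximated data distribution and predicted to be locally non-robust.
    score = log_density + np.log(fragility + 1e-12)
    return np.argsort(score)[::-1][:k]                    # indices of top-k seeds
```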
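Similarly, the pixel-level step can be pictured as a simple genetic loop in which two fitness functions alternate across generations: one rewards misclassification, the other rewards perceptual closeness to the seed. The `model` interface (returning class probabilities), the L-infinity distance proxy for perceptual quality, and the mutation scheme below are hedged assumptions for illustration, not the paper's exact operators.

```python
# Hedged sketch of a two-step genetic search at the pixel level, alternating
# between an attack objective and a perceptual-quality objective.
import numpy as np

def two_step_ga(model, seed, true_label, pop_size=50, gens=40, eps=0.05):
    rng = np.random.default_rng(0)
    # initial population: random perturbations of the seed within an eps-ball
    pop = np.clip(seed + rng.uniform(-eps, eps, size=(pop_size,) + seed.shape),
                  0.0, 1.0)

    def attack_fitness(x):
        # odd-step objective: push the model away from the true label
        return 1.0 - model(x)[true_label]

    def quality_fitness(x):
        # even-step objective: keep the candidate perceptually close to the seed
        return -np.abs(x - seed).max()

    for g in range(gens):
        fitness = attack_fitness if g % 2 == 0 else quality_fitness
        scores = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]        # keep top half
        children = np.clip(parents + rng.normal(0.0, eps / 4,
                                                size=parents.shape), 0.0, 1.0)
        pop = np.concatenate([parents, children])                   # next generation
    return max(pop, key=attack_fitness)                             # best AE candidate
```

Alternating the two objectives, rather than optimizing a weighted sum, lets the search first reach the misclassification region and then pull the candidate back toward the seed, which is one simple way to control AE quality without hand-tuning a trade-off weight.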


