Work In Progress: Safety and Robustness Verification of Autoencoder-Based Regression Models using the NNV Tool

by Neelanjana Pal, et al.

This work-in-progress paper introduces robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for robustness verification of image-classification NNs. Despite ongoing progress in developing verification methods for the safety and robustness of various deep neural networks (DNNs), robustness checking of autoencoder models has not yet been considered. We explore this open research space and bridge the gap by extending existing DNN robustness-analysis methods to such autoencoder networks. While autoencoder-based classification models operate much like image-classification NNs, the functionality of regression models is distinctly different. We introduce two robustness evaluation metrics for autoencoder-based regression models: percentage robustness and un-robustness grade. We also modify the existing ImageStar approach, adjusting its variables to handle the specific input types of regression networks. The approach is implemented as an extension of NNV, then applied and evaluated on a dataset, with a case-study experiment on the same dataset. To the best of the authors' knowledge, this work-in-progress paper is the first to demonstrate reachability analysis of autoencoder-based NNs.
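As a rough illustration of the first metric, percentage robustness can be read as the fraction of test inputs for which every sampled perturbation keeps the regression output within a tolerance of the unperturbed output. The sketch below is a sampling-based approximation with hypothetical names (`model`, `perturb`, `eps`); it is not the paper's reachability-based method, which uses set-based (ImageStar) analysis in NNV rather than sampling.

```python
# Hypothetical sketch of a "percentage robustness" metric for a
# regression model: the fraction of inputs whose output stays within
# a tolerance `eps` of the nominal output under sampled perturbations.
# All names and semantics here are illustrative assumptions, not taken
# from the paper or the NNV tool.

def percentage_robustness(model, inputs, perturb, eps, n_samples=10):
    """model: callable x -> float; perturb: callable x -> perturbed x."""
    robust = 0
    for x in inputs:
        y0 = model(x)  # nominal (unperturbed) output
        # input counts as robust if every sampled perturbation stays close
        if all(abs(model(perturb(x)) - y0) <= eps for _ in range(n_samples)):
            robust += 1
    return 100.0 * robust / len(inputs)
```

A set-based verifier would replace the inner sampling loop with an over-approximation of the reachable output set, turning the per-input check into a sound guarantee rather than an empirical estimate.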




