Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images

11/28/2021
by   Dvij Kalaria, et al.

With the rapid advancement and increasing use of deep learning models in image recognition, security becomes a major concern for their deployment in safety-critical systems. Since the accuracy and robustness of deep learning models derive primarily from the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks. Adversarial examples are typically crafted by adding subtle perturbations to normal images; these perturbations are largely imperceptible to humans but can seriously confuse state-of-the-art machine learning models. What is so special about these slight, intelligently crafted perturbations or noise additions that they lead to catastrophic misclassifications by deep neural networks? Using statistical hypothesis testing, we find that Conditional Variational AutoEncoders (CVAEs) are surprisingly good at detecting imperceptible image perturbations. In this paper, we show how CVAEs can be used effectively to detect adversarial attacks on image classification networks. We demonstrate our results on the MNIST and CIFAR-10 datasets and show that our method achieves performance comparable to state-of-the-art adversary-detection methods while not being confused by noisy images, where most existing methods falter.
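To make the detection idea concrete, below is a minimal sketch of how a CVAE-based detector might work, assuming a PyTorch implementation on flattened MNIST-style inputs. The architecture, dimensions, and thresholding rule are illustrative assumptions, not the paper's exact method: the CVAE is trained on clean images conditioned on their labels, and at test time an input is reconstructed under the classifier's predicted label; an unusually large reconstruction error, relative to a threshold calibrated on clean data, flags the input as adversarial.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal fully connected Conditional VAE (illustrative only)."""
    def __init__(self, x_dim=784, y_dim=10, z_dim=20, h_dim=400):
        super().__init__()
        # Encoder q(z | x, y): conditions on the class label
        self.enc = nn.Linear(x_dim + y_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x | z, y)
        self.dec1 = nn.Linear(z_dim + y_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x, y):
        h = F.relu(self.enc(torch.cat([x, y], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        h = F.relu(self.dec1(torch.cat([z, y], dim=1)))
        return torch.sigmoid(self.dec2(h)), mu, logvar

def reconstruction_error(model, x, y_onehot):
    # Per-sample binary cross-entropy between the input and its
    # reconstruction under the given (predicted) label.
    with torch.no_grad():
        x_hat, _, _ = model(x, y_onehot)
        return F.binary_cross_entropy(x_hat, x, reduction="none").sum(dim=1)

def is_adversarial(model, x, y_onehot, threshold):
    # Flag inputs whose error exceeds a threshold calibrated on clean
    # held-out images (e.g. a high percentile of their error distribution).
    return reconstruction_error(model, x, y_onehot) > threshold

The intuition, per the abstract, is that an adversarial image carries a wrong predicted label, so the label-conditioned reconstruction degrades sharply, whereas a merely noisy image keeps its correct label and still reconstructs well, which is why such a detector is not fooled by noise.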

Related research

08/29/2022
Towards Adversarial Purification using Denoising AutoEncoders
With the rapid advancement and increased use of deep learning models in ...
11/22/2019
Attack Agnostic Statistical Method for Adversarial Detection
Deep Learning based AI systems have shown great promise in various domai...
10/29/2020
Can the state of relevant neurons in a deep neural network serve as indicators for detecting adversarial attacks?
We present a method for adversarial attack detection based on the inspec...
05/21/2018
Adversarial Attacks on Classification Models for Graphs
Deep learning models for graphs have achieved strong performance for the...
05/09/2022
Btech thesis report on adversarial attack detection and purification of adversarially attacked images
This is a Btech thesis report on detection and purification of adversarial...
08/18/2022
Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries
The security of deep learning (DL) systems is an extremely important fie...
10/03/2020
A Deep Genetic Programming based Methodology for Art Media Classification Robust to Adversarial Perturbations
Art Media Classification problem is a current research area that has att...
