Attack Agnostic Adversarial Defense via Visual Imperceptible Bound

10/25/2020
by   Saheb Chhabra, et al.

The high susceptibility of deep learning algorithms to structured and unstructured perturbations has motivated the development of efficient adversarial defense algorithms. However, the lack of generalizability of existing defense algorithms and the high variability in the performance of attack algorithms across databases raise several questions about the effectiveness of these defenses. In this research, we aim to design a defense model that is robust, within a certain bound, against both seen and unseen adversarial attacks. This bound is related to the visual appearance of an image, and we term it the Visual Imperceptible Bound (VIB). To compute this bound, we propose a novel method that uses the database characteristics. The VIB is further used to measure the effectiveness of attack algorithms. The performance of the proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases against multiple attacks, including C&W (l_2) and DeepFool. The proposed defense model not only increases robustness against several attacks but also retains or improves the classification accuracy on the original clean test set. The proposed algorithm is attack agnostic, i.e., it does not require any knowledge of the attack algorithm.
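To make the notion of a visual imperceptibility bound concrete, the sketch below checks whether an adversarial perturbation stays within a fixed l_2-norm threshold per image. This is a simplified illustration, not the paper's method: the function name `within_vib` and the use of a plain l_2 threshold are assumptions, whereas the paper derives its bound from database characteristics.

```python
import numpy as np

def within_vib(clean, adversarial, bound):
    """Return True if the l2 norm of the perturbation is within `bound`.

    Hypothetical sketch: the paper's VIB is computed from database
    characteristics; here we stand in a simple per-image l2 threshold.
    """
    perturbation = adversarial.astype(np.float64) - clean.astype(np.float64)
    return float(np.linalg.norm(perturbation)) <= bound

# Example on an MNIST-sized image with a uniform perturbation of 0.01
# per pixel; its l2 norm is 0.01 * sqrt(28 * 28) = 0.28.
clean = np.zeros((28, 28))
adv = clean + 0.01
print(within_vib(clean, adv, bound=0.5))  # True: perturbation norm 0.28 <= 0.5
```

An attack whose perturbation exceeds the bound would be visually perceptible under this criterion, which is how such a bound can also score the effectiveness of attack algorithms.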

11/05/2020

Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty

Dataset bias is a problem in adversarial machine learning, especially in...
11/24/2018

Attention, Please! Adversarial Defense via Attention Rectification and Preservation

This study provides a new understanding of the adversarial attack proble...
04/22/2019

Blessing in Disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness

Turing test was originally proposed to examine whether machine's behavio...
05/04/2022

Rethinking Classifier and Adversarial Attack

Various defense models have been proposed to resist adversarial attack a...
02/07/2022

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests

Non-parametric two-sample tests (TSTs) that judge whether two sets of sa...
06/21/2020

With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Online Regression Models

With the rise of third parties in the machine learning pipeline, the ser...
03/09/2022

Reverse Engineering ℓ_p attacks: A block-sparse optimization approach with recovery guarantees

Deep neural network-based classifiers have been shown to be vulnerable t...