Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

09/17/2017
by Xiaoyu Cao, et al.

Deep neural networks (DNNs) have transformed several artificial intelligence research areas, including computer vision, speech recognition, and natural language processing. However, recent studies have demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example whose label is correctly predicted by a DNN classifier. An attacker can add small, carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label; the crafted testing example is called an adversarial example, and such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety- and security-critical applications such as self-driving cars. In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign or adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers perform point-based classification: given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on the MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on benign testing examples as point-based classification, but it is significantly more robust than point-based classification to state-of-the-art evasion attacks.
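To make the idea concrete, below is a minimal Python sketch of region-based classification as the abstract describes it: sample points uniformly at random from a hypercube centered at the testing example, run the point-based classifier on each sample, and return the majority-vote label. The function name region_based_predict and the parameters radius and num_samples are illustrative assumptions, not the authors' implementation; the abstract does not specify how the hypercube size is chosen, so radius is left as a free parameter here.

import numpy as np
from collections import Counter

def region_based_predict(classifier, x, radius=0.3, num_samples=1000, rng=None):
    """Region-based classification (sketch): predict the label of x by
    majority vote over points sampled uniformly from the hypercube
    [x - radius, x + radius] centered at x."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw num_samples points uniformly at random from the hypercube.
    noise = rng.uniform(-radius, radius, size=(num_samples,) + x.shape)
    # Clipping to [0, 1] assumes normalized pixel inputs (e.g., MNIST/CIFAR-10).
    samples = np.clip(x + noise, 0.0, 1.0)
    # Point-based prediction for each sampled point.
    labels = [classifier(s) for s in samples]
    # The region-based label is the most frequent point-based label.
    return Counter(labels).most_common(1)[0][0]

Intuitively, because an adversarial example sits close to the classification boundary, most of the hypercube around it still lies in the true class's region, so the vote recovers the correct label, while a benign example sits far enough from the boundary that the vote matches the point-based prediction.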

Related research

05/15/2019
War: Detecting adversarial examples by pre-processing input data
Deep neural networks (DNNs) have demonstrated their outstanding performa...

04/15/2019
Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction
Deep Neural Networks (DNNs) have tremendous potential in advancing the v...

03/09/2021
Selective and Features based Adversarial Example Detection
Security-sensitive applications that rely on Deep Neural Networks (DNNs...

11/05/2019
DLA: Dense-Layer-Analysis for Adversarial Example Detection
In recent years, Deep Neural Networks (DNNs) have achieved remarkable res...

06/01/2019
Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification
Deep neural networks (DNNs) have recently achieved state-of-the-art perf...

05/21/2022
Gradient Concealment: Free Lunch for Defending Adversarial Attacks
Recent studies show that the deep neural networks (DNNs) have achieved g...

07/01/2020
ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Deep Neural Networks (DNNs) have been applied successfully in computer v...
