AuxBlocks: Defense Adversarial Example via Auxiliary Blocks

02/18/2019
by Yueyao Yu, et al.

Deep learning models are vulnerable to adversarial examples, which poses an indisputable threat to their applications. However, recent studies have observed that gradient-masking defenses are self-deceiving if an attacker is aware of the defense. In this paper, we propose a new defense method based on appending information. We introduce the Aux Block model, which produces extra outputs as a self-ensemble algorithm, and analytically investigate the robustness mechanism of Aux Block. We have empirically studied the efficiency of our method against adversarial examples under two types of white-box attacks, and found that even in the full white-box attack, where an adversary can craft malicious examples from the defense model itself, our method retains a more robust performance of about 54.6% precision on the Cifar10 dataset and 38.7% on the Mini-Imagenet dataset. Another advantage of our method is that it maintains the prediction accuracy of the classification model on clean images, and thereby exhibits high potential in practical applications.
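The abstract only sketches how the auxiliary blocks form a self-ensemble, so the following is a minimal PyTorch sketch of the general idea, not the authors' architecture: a shared backbone with a main head plus auxiliary prediction blocks attached to intermediate features, whose softmax outputs are averaged at inference so an attack must fool every head at once. All names (AuxBlockNet, stage1, ensemble_predict) and the tiny two-stage backbone are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AuxBlockNet(nn.Module):
        """Hypothetical sketch of an auxiliary-block self-ensemble classifier."""

        def __init__(self, num_classes=10):
            super().__init__()
            # Shared backbone producing two intermediate feature maps.
            self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
            self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
            # Main head on the final features.
            self.main_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
            # One small auxiliary block per intermediate stage.
            self.aux1 = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
            self.aux2 = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

        def forward(self, x):
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            # Return logits from every head; training can sum a loss over all of them.
            return self.main_head(f2), self.aux1(f1), self.aux2(f2)

        @torch.no_grad()
        def ensemble_predict(self, x):
            # Self-ensemble: average the probability outputs of all heads.
            probs = [F.softmax(z, dim=1) for z in self.forward(x)]
            return torch.stack(probs).mean(dim=0)

A quick usage check under the same assumptions:

    model = AuxBlockNet()
    preds = model.ensemble_predict(torch.randn(8, 3, 32, 32))  # shape (8, 10)

During training, one would typically sum a cross-entropy loss over all heads so that each auxiliary block learns its own decision path rather than merely copying the main head.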


Related research

04/10/2018: On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses
Neural networks are known to be vulnerable to adversarial examples. In t...

02/22/2020: Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks
Recent studies have shown that deep learning models are vulnerable to sp...

08/25/2020: Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Convolutional Neural Networks have been shown to be vulnerable to advers...

02/01/2019: The Efficacy of SHIELD under Different Threat Models
We study the efficacy of SHIELD in the face of alternative threat models...

08/31/2021: Morphence: Moving Target Defense Against Adversarial Examples
Robustness to adversarial examples of machine learning models remains an...

09/24/2018: On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces
Recent studies have found that deep learning systems are vulnerable to a...

06/15/2022: Morphence-2.0: Evasion-Resilient Moving Target Defense Powered by Out-of-Distribution Detection
Evasion attacks against machine learning models often succeed via iterat...
