The art of defense: letting networks fool the attacker

04/07/2021
by Jinlai Zhang, et al.

Some deep neural networks are invariant to certain input transformations; for example, PointNet is invariant to permutations of the input point cloud. In this paper, we demonstrate that this property can be a powerful defense against gradient-based attacks. Specifically, we apply a random input transformation under which the network we want to defend is invariant. Extensive experiments demonstrate that the proposed scheme outperforms state-of-the-art defense methods, driving the attack success rate to nearly zero.
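The abstract describes the defense only at a high level. Below is a minimal sketch of how such a scheme might look for the PointNet case, assuming a PyTorch point-cloud classifier whose prediction is invariant to point ordering; the `RandomInvariantTransform` wrapper and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the idea, assuming a PyTorch point-cloud classifier
# (e.g., PointNet) whose prediction is invariant to the ordering of its
# input points. The wrapper below is hypothetical, not the paper's code.
import torch
import torch.nn as nn

class RandomInvariantTransform(nn.Module):
    """Applies a fresh random point permutation on every forward pass.

    By permutation invariance, clean predictions are unchanged, but an
    attacker querying the model sees a different input ordering on each
    call, which disrupts iterative gradient-based attacks such as PGD.
    """

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, 3) point clouds
        perm = torch.randperm(x.size(1), device=x.device)
        return self.model(x[:, perm, :])

# Hypothetical usage: wrap an existing permutation-invariant classifier.
# defended = RandomInvariantTransform(pointnet_model)
# logits = defended(point_clouds)
```

The key design point is that the transformation is resampled per forward pass, so gradients computed by the attacker at one query do not align with the input the model sees at the next, while clean accuracy is untouched by the network's invariance.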

Related research

02/28/2019 · Adversarial Attack and Defense on Point Sets
Emergence of the utility of 3D point cloud data in critical vision tasks...

12/21/2020 · Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Trojan (backdoor) attack is a form of adversarial attack on deep neural ...

10/11/2020 · IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration
Point cloud is an important 3D data representation widely used in many e...

01/26/2020 · Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks
Gradient-based adversarial attacks on neural networks can be crafted in ...

05/28/2019 · A Parameterized Perspective on Protecting Elections
We study the parameterized complexity of the optimal defense and optimal...

06/29/2023 · Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features
Recent studies have demonstrated the susceptibility of deep neural netwo...

10/20/2021 · Detecting Backdoor Attacks Against Point Cloud Classifiers
Backdoor attacks (BA) are an emerging threat to deep neural network clas...
