Spatially Transformed Adversarial Examples

01/08/2018
by Chaowei Xiao et al.

Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the L_p distance for penalizing perturbations. Researchers have explored different defense methods to defend against such adversarial attacks. While the effectiveness of L_p distance as a metric of perceptual quality remains an active research area, in this paper we will instead focus on a different type of perturbation, namely spatial transformation, as opposed to manipulating the pixel values directly as in prior works. Perturbations generated through spatial transformation could result in large L_p distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems. This potentially provides a new direction in adversarial example generation and the design of corresponding defenses. We visualize the spatial transformation based perturbation for different examples and show that our technique can produce realistic adversarial examples with smooth image deformation. Finally, we visualize the attention of deep networks with different types of adversarial examples to better understand how these examples are interpreted.
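The core idea, optimizing a smooth per-pixel flow field that warps the clean image rather than adding a pixel-level perturbation, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation (not the authors' code); the function name, the smoothness weight tau, and the optimization hyperparameters are illustrative assumptions, and the total-variation-style penalty only approximates the paper's flow smoothness loss.

```python
import torch
import torch.nn.functional as F

def spatial_transform_attack(model, x, target, tau=0.05, n_steps=200, lr=0.01):
    """x: (1, C, H, W) clean image in [0, 1]; target: (1,) target class index."""
    _, _, h, w = x.shape

    # Base sampling grid in normalized coordinates [-1, 1], shape (1, H, W, 2).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)

    # Per-pixel flow field (displacements in normalized coordinates) to optimize.
    flow = torch.zeros(1, h, w, 2, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)

    for _ in range(n_steps):
        # Warp the clean image through the flow field with bilinear sampling.
        x_adv = F.grid_sample(x, base_grid + flow, align_corners=True)

        # Adversarial objective: push the prediction toward the target class.
        adv_loss = F.cross_entropy(model(x_adv), target)

        # Smoothness penalty (total-variation style) keeps the deformation
        # locally consistent so the result stays perceptually realistic.
        tv = (flow[:, 1:, :, :] - flow[:, :-1, :, :]).abs().mean() + \
             (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()

        loss = adv_loss + tau * tv
        opt.zero_grad()
        loss.backward()
        opt.step()

    return F.grid_sample(x, base_grid + flow.detach(), align_corners=True)
```

Because the perturbation is a geometric deformation rather than an additive pixel change, the resulting image can sit far from the original in L_p distance while still looking natural, which is exactly the regime the paper studies.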


