Adversarial Patch

12/27/2017
by Tom B. Brown et al.

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.


Related research

02/10/2021 - Enhancing Real-World Adversarial Patches with 3D Modeling Techniques
Although many studies have examined adversarial examples in the real wor...

07/09/2023 - Random Position Adversarial Patch for Vision Transformers
Previous studies have shown the vulnerability of vision transformers to ...

12/06/2018 - Towards Hiding Adversarial Examples from Network Interpretation
Deep networks have been shown to be fooled rather easily using adversari...

10/03/2016 - Rain structure transfer using an exemplar rain image for synthetic rain image generation
This letter proposes a simple method of transferring rain structures of ...

06/03/2021 - A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks
Adversarial examples for neural network image classifiers are known to b...

05/05/2020 - Adversarial Training against Location-Optimized Adversarial Patches
Deep neural networks have been shown to be susceptible to adversarial ex...

07/04/2017 - UPSET and ANGRI: Breaking High Performance Image Classifiers
In this paper, targeted fooling of high performance image classifiers is...
