Note on Attacking Object Detectors with Adversarial Stickers

12/21/2017
by Kevin Eykholt, et al.

Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples: inputs crafted so that a deep learning algorithm is very likely to mislabel them. This is problematic when deep learning is used to assist in safety-critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Although state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly present both static and dynamic test results. We design an algorithm that produces physical adversarial inputs which can fool the YOLO object detector and can also attack Faster-RCNN with a relatively high success rate via transferability. Furthermore, our algorithm can compress the adversarial inputs into stickers that, when attached to the targeted object, cause the detector to either mislabel or fail to detect the object a high percentage of the time. This note provides a small set of results; our upcoming paper will contain a thorough evaluation of other object detectors and will present the algorithm.
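For readers unfamiliar with this class of attack, the sketch below illustrates the general idea of optimizing an adversarial sticker against a detector with gradient descent. It is not the authors' algorithm (which is presented in their full paper); `detector_score` and `apply_sticker` are hypothetical placeholders standing in for a differentiable detector confidence and a compositing step, and the use of many images under varied conditions is only meant to suggest how robustness to physical variation could be encouraged.

```python
# Minimal, illustrative sketch of a gradient-based adversarial-sticker attack.
# NOTE: this is NOT the algorithm from the note; it only shows the generic
# pattern of minimizing a detector's confidence in the attacked object.
import torch

def apply_sticker(image, sticker, mask):
    # Paste the sticker onto the image in the region selected by `mask`.
    return image * (1 - mask) + sticker * mask

def optimize_sticker(images, mask, detector_score, steps=1000, lr=0.01):
    # `detector_score(image)` is a hypothetical differentiable function that
    # returns the detector's maximum objectness/class score for the target.
    sticker = torch.rand(3, images.shape[-2], images.shape[-1], requires_grad=True)
    optimizer = torch.optim.Adam([sticker], lr=lr)

    for _ in range(steps):
        # Average the detection score over images captured under different
        # conditions (distance, angle, lighting) so the sticker remains
        # adversarial when printed and photographed in the physical world.
        loss = torch.stack(
            [detector_score(apply_sticker(img, sticker, mask)) for img in images]
        ).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the sticker within the valid (printable) pixel range.
        with torch.no_grad():
            sticker.clamp_(0.0, 1.0)

    return sticker.detach()
```

A sticker optimized this way would then be printed, attached to the target object, and evaluated against the detector in both static (fixed viewpoint) and dynamic (moving camera) settings, which is the kind of evaluation the note reports.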
