Adversarial Patch

12/27/2017
by Tom B. Brown et al.

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
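The abstract describes a patch that is trained to fool a classifier regardless of scene, location, and transformation. For concreteness, here is a minimal sketch of such a patch-training loop in PyTorch. This is an illustrative reimplementation, not the authors' released code: the choice of classifier (a pretrained ResNet-50), the patch size, the transformation ranges, and all hyperparameters below are assumptions, and the paper's full method optimizes over richer distributions of scenes, locations, and transformations.

```python
# Sketch of a patch-training loop in the spirit of Brown et al.,
# "Adversarial Patch" (arXiv:1712.09665). Model, transformation
# ranges, and hyperparameters are assumptions, not the paper's values.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pretrained classifier to attack (assumed target model).
# ImageNet mean/std normalization is omitted here for brevity.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 859   # "toaster", the target class used in the paper
PATCH_SIZE = 64      # patch side length in pixels (assumption)
IMG_SIZE = 224

# The patch itself is the only trainable parameter.
patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste a randomly rotated, scaled, and translated copy of the
    patch onto each image: a simplified expectation-over-transformations
    step (the paper also varies lighting and location distributions)."""
    patched = images.clone()
    for i in range(images.size(0)):
        angle = float(torch.empty(1).uniform_(-45, 45))
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        size = max(8, int(PATCH_SIZE * scale))
        p = TF.rotate(patch.clamp(0, 1), angle)      # zero-fill corners
        p = TF.resize(p, [size, size], antialias=True)
        # Random location within the image.
        x = int(torch.randint(0, IMG_SIZE - size, (1,)))
        y = int(torch.randint(0, IMG_SIZE - size, (1,)))
        patched[i, :, y:y + size, x:x + size] = p
    return patched

def train_step(images):
    """One optimization step: maximize the classifier's probability of
    TARGET_CLASS on patched copies of the training scenes."""
    optimizer.zero_grad()
    logits = model(apply_patch(images.to(device), patch))
    target = torch.full((images.size(0),), TARGET_CLASS, device=device)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)  # keep the patch a valid image
    return loss.item()

# Example: one step on a batch of random stand-ins for real scenes.
dummy_scenes = torch.rand(8, 3, IMG_SIZE, IMG_SIZE)
print(train_step(dummy_scenes))
```

Averaging the loss over random rotations, scales, and placements is what gives the trained patch its universality and robustness: gradients only reward features of the patch that survive the whole transformation distribution, so the result transfers to scenes and viewpoints never seen during training.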
