PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning

04/12/2020
by Chenglin Yang, et al.

Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification. A limitation of current patch-based black-box attacks is that they perform poorly on targeted attacks, and even in the less challenging non-targeted scenario they require a large number of queries. Our proposed PatchAttack is query-efficient and can break models under both targeted and non-targeted attacks. PatchAttack induces misclassifications by superimposing small textured patches on the input image. We parametrize the appearance of these patches with a dictionary of class-specific textures. This texture dictionary is learned by clustering Gram matrices of feature activations from a VGG backbone. PatchAttack optimizes the position and texture parameters of each patch using reinforcement learning. Our experiments show that PatchAttack achieves >99% success rates on ImageNet across a range of architectures, while only manipulating roughly 3% and 10% of the image area for non-targeted and targeted attacks, respectively, and that it successfully circumvents state-of-the-art adversarial defense methods.
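The texture-dictionary step described above can be illustrated with a minimal sketch: compute Gram matrices from feature maps and cluster them so the centroids act as texture-dictionary entries. This is not the paper's implementation; the function names are hypothetical, the features are stand-ins for VGG activations (assumed as NumPy arrays of shape `(C, H, W)`), and plain k-means replaces whatever clustering variant the authors used.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (C, H, W): pairwise
    correlations between channel activations, normalized by spatial size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def build_texture_dictionary(gram_list, k, iters=50, seed=0):
    """Cluster flattened Gram matrices with plain k-means (an assumption,
    not necessarily the paper's clustering method); the k centroids serve
    as the entries of a class-specific texture dictionary."""
    x = np.stack([g.ravel() for g in gram_list]).astype(np.float64)
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each Gram matrix to its nearest centroid.
        dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels
```

In practice the input feature maps would come from several layers of a pre-trained VGG, one dictionary per target class; here any `(C, H, W)` array demonstrates the mechanics.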
