Reverse Engineering ℓ_p attacks: A block-sparse optimization approach with recovery guarantees

03/09/2022
by Darshan Thaker, et al.

Deep neural network-based classifiers have been shown to be vulnerable to imperceptible perturbations of their input, such as ℓ_p norm-bounded adversarial attacks. This has motivated the development of many defense methods, which are then broken by new attacks, and so on. This paper focuses on a different but related problem of reverse engineering adversarial attacks. Specifically, given an attacked signal, we study conditions under which one can determine the type of attack (ℓ_1, ℓ_2, or ℓ_∞) and recover the clean signal. We pose this problem as a block-sparse recovery problem, where both the signal and the attack are assumed to lie in a union of subspaces that includes one subspace per class and one subspace per attack type. We derive geometric conditions on the subspaces under which any attacked signal can be decomposed as the sum of a clean signal plus an attack. In addition, by determining the subspaces that contain the signal and the attack, we can also classify the signal and determine the attack type. Experiments on digit and face classification demonstrate the effectiveness of the proposed approach.
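To make the block-sparse formulation concrete, the sketch below (not the authors' implementation) decomposes an attacked signal x ≈ D_s c_s + D_a c_a, where D_s stacks one sub-dictionary (block) per class and D_a one block per attack type. Block sparsity is encouraged with a group-lasso penalty and solved by proximal gradient descent with group soft-thresholding. All names (blocks, lam, n_iters) and the choice of solver are illustrative assumptions.

```python
import numpy as np

def block_sparse_decompose(x, blocks, lam=0.1, n_iters=500):
    """Decompose x over a list of (name, dictionary) blocks.

    Minimizes 0.5 * ||x - sum_b D_b c_b||^2 + lam * sum_b ||c_b||_2
    and returns a dict mapping each block name to its coefficients.
    """
    D = np.hstack([Db for _, Db in blocks])           # concatenated dictionary
    sizes = [Db.shape[1] for _, Db in blocks]
    idx = np.cumsum([0] + sizes)                      # block boundaries in c
    c = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1/L for the quadratic term

    for _ in range(n_iters):
        grad = D.T @ (D @ c - x)                      # gradient of 0.5*||x - Dc||^2
        z = c - step * grad
        # Group soft-thresholding: shrink each block's coefficients toward zero.
        for k in range(len(blocks)):
            zk = z[idx[k]:idx[k + 1]]
            nrm = np.linalg.norm(zk)
            z[idx[k]:idx[k + 1]] = max(0.0, 1 - step * lam / (nrm + 1e-12)) * zk
        c = z

    return {name: c[idx[k]:idx[k + 1]] for k, (name, _) in enumerate(blocks)}
```

Under this sketch, the predicted class and attack type would be read off as the class block and attack block with the largest coefficient norm, and a clean-signal estimate could be re-synthesized from the selected class block alone.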

