Going Far Boosts Attack Transferability, but Do Not Do It

02/20/2021
by   Sizhe Chen, et al.

Deep Neural Networks (DNNs) can be easily fooled by Adversarial Examples (AEs) whose difference from the original samples is imperceptible to human eyes. Moreover, AEs crafted by attacking one surrogate DNN tend to fool other black-box DNNs as well, a property known as attack transferability. Existing works reveal that adopting certain optimization algorithms in the attack improves transferability, but the underlying reasons have not been thoroughly studied. In this paper, we investigate the impact of optimization on attack transferability through comprehensive experiments covering 7 optimization algorithms, 4 surrogates, and 9 black-box models. Through thorough empirical analysis from three perspectives, we surprisingly find that the varied transferability of AEs produced by different optimization algorithms is strongly related to their Root Mean Square Error (RMSE) from the original samples. On this basis, one could simply approach high transferability by attacking until the RMSE decreases, which motivates us to propose a LArge RMSE Attack (LARA). Although LARA significantly improves transferability by 20%, it does not exploit the real vulnerability of DNNs, leading to a natural urge that the strength of all attacks should be measured by both the widely used ℓ_∞ bound and the RMSE addressed in this paper, so that tricky enhancement of transferability would be avoided.
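As a rough illustration of the two distortion measures the abstract contrasts (this is not the authors' released code), the sketch below computes both the ℓ_∞ bound and the RMSE of a perturbation with NumPy. The example images, the ε = 8/255 budget, and the sparse-versus-dense perturbations are hypothetical, chosen only to show that two attacks under the same ℓ_∞ bound can differ widely in RMSE.

```python
import numpy as np

def perturbation_stats(x, x_adv):
    """Return both distortion measures discussed in the paper:
    the l_inf norm and the RMSE between an adversarial example
    and its original sample."""
    delta = x_adv.astype(np.float64) - x.astype(np.float64)
    linf = np.abs(delta).max()            # widely used l_inf bound
    rmse = np.sqrt(np.mean(delta ** 2))   # RMSE from the original sample
    return linf, rmse

# Hypothetical setup: a random "image" and two perturbations with the
# same l_inf budget eps = 8/255. One touches only a small patch, the
# other touches nearly every pixel, so their RMSE differs a lot even
# though the l_inf bound is identical.
rng = np.random.default_rng(0)
x = rng.random((3, 224, 224)).astype(np.float32)
eps = 8 / 255

sparse_delta = np.zeros_like(x)
sparse_delta[:, :16, :16] = eps                              # few pixels changed
dense_delta = eps * np.sign(rng.standard_normal(x.shape))    # all pixels changed

for name, d in [("sparse", sparse_delta), ("dense", dense_delta)]:
    linf, rmse = perturbation_stats(x, np.clip(x + d, 0.0, 1.0))
    print(f"{name}: l_inf = {linf:.4f}, RMSE = {rmse:.4f}")
```

Under the paper's finding, reporting only the ℓ_∞ bound would make these two perturbations look equally strong, while their RMSE (and hence, per the abstract, their transferability) can differ substantially, which is why the authors argue both numbers should be reported.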
