Delving into the Adversarial Robustness on Face Recognition

07/08/2020
by Xiao Yang, et al.

Face recognition has recently made substantial progress and achieved high accuracy on standard benchmarks, driven by the development of deep convolutional neural networks (CNNs). However, the lack of robustness of deep CNNs to adversarial examples has raised security concerns for the many applications of face recognition. To facilitate a better understanding of the adversarial vulnerability of existing face recognition models, in this paper we perform comprehensive robustness evaluations, which can serve as a reference for evaluating the robustness of subsequent work on face recognition. We investigate 15 popular face recognition models and evaluate their robustness using various adversarial attacks as an important surrogate. These evaluations are conducted under diverse adversarial settings, including dodging and impersonation attacks, ℓ_2 and ℓ_∞ attacks, and white-box and black-box attacks. We further propose a landmark-guided cutout (LGC) attack method that exploits the special characteristics of face recognition to improve the transferability of adversarial examples in black-box attacks. Based on our evaluations, we draw several important findings, which are crucial for understanding adversarial robustness and provide insights for future research on face recognition. Code is available at <https://github.com/ShawnXYang/Face-Robustness-Benchmark>.
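To make the LGC idea concrete, below is a minimal sketch of such an attack, assuming a PyTorch face model that maps images to identity embeddings. The function name `lgc_attack`, its parameters, and the landmark handling are illustrative assumptions based only on the abstract, not the paper's official implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn.functional as F

def lgc_attack(model, x, feat_ref, landmarks, eps=8/255, alpha=1/255,
               steps=40, cutout=20, dodging=True):
    """Hypothetical landmark-guided cutout (LGC) attack sketch.

    model     -- face model mapping images to identity embeddings
    x         -- input batch (N, 3, H, W) with pixel values in [0, 1]
    feat_ref  -- reference embedding: the victim's own identity for
                 dodging, the target identity for impersonation
    landmarks -- list of (x, y) facial landmark coordinates
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Zero out a square patch around a randomly chosen landmark
        # before the forward pass (the "cutout" step).
        cx, cy = landmarks[torch.randint(len(landmarks), (1,)).item()]
        half = cutout // 2
        mask = torch.ones_like(x_adv)
        mask[..., max(cy - half, 0):cy + half,
             max(cx - half, 0):cx + half] = 0.0
        feat = model(x_adv * mask)
        sim = F.cosine_similarity(feat, feat_ref, dim=-1).sum()
        # Dodging pushes the embedding away from the reference;
        # impersonation pulls it toward the reference.
        loss = -sim if dodging else sim
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss with a signed-gradient step, then project
        # back into the l_inf ball of radius eps around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

Randomizing the occluded landmark region at each step is one plausible reading of "landmark-guided cutout": it keeps the perturbation from overfitting to how a single surrogate model responds at identity-critical facial regions, which is the kind of face-specific structure the abstract says the method exploits for black-box transferability.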
