Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

03/09/2022
by   Xiao Yang, et al.

Recent studies have revealed the vulnerability of face recognition models to physical adversarial patches, raising security concerns about deployed face recognition systems. However, it remains challenging to ensure reproducibility for most attack algorithms under complex physical conditions, which has prevented a systematic evaluation of existing methods. It is therefore imperative to develop a framework that enables a comprehensive evaluation of the vulnerability of face recognition in the physical world. To this end, we propose to simulate the complex transformations of faces in the physical world via 3D face modeling, which serves as a digital counterpart of physical faces. This generic framework allows us to control different face variations and physical conditions, enabling comprehensive and reproducible evaluations. With this digital simulator, we further propose Face3DAdv, a method that accounts for 3D face transformations and realistic physical variations. Extensive experiments validate that Face3DAdv significantly improves the effectiveness of diverse physically realizable adversarial patches, in both simulated and physical environments, against various white-box and black-box face recognition models.
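To make the core idea concrete, below is a minimal sketch of how a patch could be optimized against a face recognition model under a 3D simulator, using an expectation-over-transformation style loop: at each step a random pose/lighting condition is sampled, the patched face is rendered, and the patch is updated to push the face embedding toward a target identity. This is not the authors' released implementation; the simulator interface (render_face, sample_conditions) and the impersonation loss are illustrative assumptions.

import torch
import torch.nn.functional as F

def optimize_patch(face_model, render_face, sample_conditions,
                   target_embedding, patch_shape, steps=200, lr=0.01):
    """Optimize an adversarial patch under randomly sampled 3D face
    transformations and physical conditions, so that the attack remains
    effective across pose and lighting changes.

    face_model:       maps a rendered image batch to identity embeddings
    render_face:      (hypothetical) differentiable renderer; takes a patch
                      texture in [0, 1] and a sampled condition, returns images
    sample_conditions:(hypothetical) samples a random pose/lighting/camera setup
    target_embedding: embedding of the impersonation target, shape (1, D)
    """
    # Optimize an unconstrained tensor; sigmoid keeps pixel values in [0, 1].
    patch = torch.zeros(patch_shape, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        cond = sample_conditions()                      # random physical condition
        img = render_face(torch.sigmoid(patch), cond)   # differentiable rendering
        emb = face_model(img)
        # Impersonation loss: pull the embedding toward the target identity.
        loss = 1.0 - F.cosine_similarity(emb, target_embedding, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(patch).detach()

Averaging the loss over many sampled conditions is what distinguishes this from a purely digital attack: the resulting patch is optimized for robustness to the physical variations the simulator can reproduce.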


