Real-World Adversarial Examples involving Makeup Application

09/04/2021
by Chang-Sheng Lin, et al.

Deep neural networks have developed rapidly and achieved outstanding performance in several tasks, such as image classification and natural language processing. However, recent studies have shown that both digital and physical adversarial examples can fool neural networks. Face-recognition systems are deployed in applications where physical adversarial examples pose security threats. Herein, we propose a physical adversarial attack based on full-face makeup. Because makeup on a human face is commonplace, it can increase the imperceptibility of the attack. Our attack framework combines a cycle-consistent generative adversarial network (CycleGAN) with a victimized classifier: the CycleGAN generates the adversarial makeup, and the victimized classifier uses the VGG-16 architecture. Our experimental results show that the attack effectively tolerates manual errors in makeup application, such as color- and position-related errors. We also demonstrate that the way the models are trained influences physical attacks; adversarial perturbations crafted from a pre-trained model are affected by the corresponding training data.
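The core idea of the framework, a generator that produces makeup while an adversarial loss from the victimized classifier shapes that makeup, can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: `TinyGenerator` and `TinyClassifier` are small stand-ins for the CycleGAN generator and the VGG-16 classifier, and the L1 term is a simplified proxy for CycleGAN's cycle-consistency constraint.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Hypothetical stand-in for the CycleGAN makeup generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyClassifier(nn.Module):
    """Hypothetical stand-in for the victimized VGG-16 face classifier."""
    def __init__(self, n_ids=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(4, n_ids)
    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

G = TinyGenerator()
f = TinyClassifier()
x = torch.rand(2, 3, 32, 32)       # batch of face images in [0, 1]
target = torch.tensor([3, 3])      # identity the attacker wants the classifier to output

adv = G(x)                         # face image with adversarial makeup applied
logits = f(adv)

# Impersonation loss: push the classifier toward the target identity, plus an
# L1 term keeping the made-up face close to the original (a simplified proxy
# for the cycle-consistency constraint that keeps the makeup realistic).
loss = F.cross_entropy(logits, target) + 0.1 * (adv - x).abs().mean()
loss.backward()                    # gradients flow into the generator's weights
```

Backpropagating the classifier's loss through the generator is what couples the two networks: the generator learns makeup patterns that simultaneously look plausible and mislead the classifier.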


Related research

11/27/2020  Robust Attacks on Deep Learning Face Recognition in the Physical World
  Deep neural networks (DNNs) have been increasingly used in face recognit...

09/20/2021  Robust Physical-World Attacks on Face Recognition
  Face recognition has been greatly facilitated by the development of deep...

02/01/2019  Adversarial Example Generation
  Deep Neural Networks have achieved remarkable success in computer vision...

03/26/2019  A geometry-inspired decision-based attack
  Deep neural networks have recently achieved tremendous success in image ...

08/20/2021  Application of Adversarial Examples to Physical ECG Signals
  This work aims to assess the reality and feasibility of the adversarial ...

09/29/2021  On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
  This paper introduces a novel adversarial example generation method agai...

02/10/2021  Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis
  Deep neural networks (DNNs) have been applied in a wide range of applica...
