Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection

10/29/2020
by Yongwei Wang, et al.

Generative adversarial networks (GANs) can now generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos, which has spurred research on fake face detection. Although fake face forensics can achieve high detection accuracy, their anti-forensic counterparts are less investigated. Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks. Because facial and background regions are often smooth, even a small perturbation can cause noticeable perceptual impairment in fake face images, which renders existing adversarial attacks ineffective as anti-forensic methods. Our perturbation analysis reveals the intuitive reason for this perceptual degradation when existing attacks are applied directly. We then propose a novel adversarial attack method, better suited to image anti-forensics, that operates in a transformed color domain and takes visual perception into account. Simple yet effective, the proposed method fools both deep-learning and non-deep-learning forensic detectors, achieving a higher attack success rate and significantly improved visual quality. In particular, when adversaries treat imperceptibility as a constraint, the proposed anti-forensic method improves the average attack success rate on fake face images by around 30% over two baseline attacks. Being more imperceptible and more transferable, the proposed method raises new security concerns for fake face imagery detection. We have released our code for public use, and we hope the proposed method can be further explored as an anti-forensic benchmark in related forensic applications.
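The abstract does not spell out the attack itself, but the core idea of crafting the adversarial perturbation in a perceptually motivated color space rather than directly in RGB can be illustrated with a minimal sketch. The example below applies a one-step FGSM-style attack restricted to the chroma (Cb/Cr) channels of the YCbCr domain, where the human visual system is less sensitive. This is an assumption-laden illustration, not the paper's exact method: `detector` is a hypothetical forensic classifier over RGB batches, and the epsilon value and BT.601 conversion are choices made for the sketch.

```python
import torch
import torch.nn.functional as F

def rgb_to_ycbcr(x):
    # x: (N, 3, H, W), RGB in [0, 1]; full-range ITU-R BT.601 transform
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)

def ycbcr_to_rgb(x):
    # Inverse of the transform above, clamped back to a valid image range
    y, cb, cr = x[:, 0:1], x[:, 1:2] - 0.5, x[:, 2:3] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return torch.cat([r, g, b], dim=1).clamp(0.0, 1.0)

def chroma_fgsm(detector, images, labels, eps=2.0 / 255):
    """One-step sign attack whose perturbation lives only in Cb/Cr.

    detector: hypothetical forensic classifier mapping RGB batches to logits
    images:   (N, 3, H, W) fake face images in [0, 1]
    labels:   labels the detector would assign (e.g., the "fake" class)
    """
    ycc = rgb_to_ycbcr(images).detach().requires_grad_(True)
    logits = detector(ycbcr_to_rgb(ycc))
    loss = F.cross_entropy(logits, labels)
    grad, = torch.autograd.grad(loss, ycc)
    delta = eps * grad.sign()
    delta[:, 0:1] = 0.0  # leave the luminance channel untouched
    return ycbcr_to_rgb(ycc.detach() + delta)
```

Zeroing the luminance component of the perturbation keeps the structure-carrying channel intact, which is one plausible way to trade attack strength for imperceptibility in the smooth facial and background regions the abstract highlights.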

Related research

- Exploring Adversarial Fake Images on Face Manifold (01/09/2021)
- An End-to-End Attack on Text-based CAPTCHAs Based on Cycle-Consistent Generative Adversarial Network (08/26/2020)
- Zooming into Face Forensics: A Pixel-level Analysis (12/12/2019)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection (03/29/2022)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection (09/03/2023)
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces (06/22/2023)
- DeepTag: Robust Image Tagging for DeepFake Provenance (09/21/2020)
