Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning

by AprilPyone MaungMaung, et al.

In this paper, we propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning. Various learnable encryption methods have been studied to protect the sensitive visual information of plain images, and some of them have been reported to be robust against all existing attacks. However, previous attacks on image encryption focus only on traditional cryptanalytic attacks or reverse translation models, so they cannot recover any visual information when a block-scrambling encryption step, which effectively destroys global information, is applied. Accordingly, in this paper, we explore for the first time whether generative models can restore sensitive visual information from encrypted images. We first point out that encrypted images retain some similarity to plain images in an embedding space. By exploiting this leaked information, we propose a guided generative model as an attack on learnable image encryption to recover personally identifiable visual information. We implement the proposed attack in two ways with two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one. Experiments were carried out on the CelebA-HQ and ImageNet datasets. Results show that images reconstructed by the proposed method are perceptually similar to the plain images.
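The core idea of the guided attack — searching a generator's latent space so that the embedding of the generated image matches the information leaked from the encrypted image — can be illustrated with a minimal numerical sketch. Everything here is a stand-in, not the paper's models: the "generator" and "feature extractor" are random linear maps, and the leaked target embedding is simulated rather than taken from a real encrypted image.

```python
import numpy as np

# Toy stand-ins (assumptions, not the actual StyleGAN/diffusion components):
# G: generator mapping a latent code to an "image" (here a linear map),
# E: feature extractor whose embeddings leak information from encrypted images.
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 16))   # latent dim 16 -> image dim 64
E = rng.normal(size=(8, 64))    # image dim 64 -> embedding dim 8

# Embedding "leaked" from an encrypted version of an unknown plain image.
z_true = rng.normal(size=16)
target_embedding = E @ (G @ z_true)

# Guided reconstruction: gradient descent on the latent code so that the
# generated image's embedding matches the leaked embedding.
A = E @ G                             # composed map: embedding = A @ z
lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 2/L for this quadratic
z = np.zeros(16)
for _ in range(2000):
    residual = A @ z - target_embedding
    z -= lr * (A.T @ residual)        # gradient of 0.5 * ||A z - target||^2

reconstruction_error = np.linalg.norm(A @ z - target_embedding)
print(f"embedding distance after optimization: {reconstruction_error:.6f}")
```

In the paper's setting, the linear maps are replaced by a pretrained generator and encoder, and the optimization is run with automatic differentiation; the sketch only shows why matching embeddings can drive the generated image toward the hidden plain image.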
