Privacy Safe Representation Learning via Frequency Filtering Encoder

08/04/2022
by Jonghu Jeong et al.

Deep learning models are increasingly deployed in real-world applications. These models often run on the server side and receive user data in an information-rich representation to solve a specific task, such as image classification. Since images can contain sensitive information that users might not be willing to share, privacy protection becomes increasingly important. Adversarial Representation Learning (ARL) is a common approach to train an encoder that runs on the client side and obfuscates an image. It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns. However, in this work, we find that a trained reconstruction attacker can successfully recover the original image from the representations produced by existing ARL methods. To address this, we introduce a novel ARL method enhanced through low-pass filtering, which limits the amount of information that can be encoded in the frequency domain. Our experimental results reveal that our approach withstands reconstruction attacks while outperforming previous state-of-the-art methods in terms of the privacy-utility trade-off. We further conduct a user study to qualitatively assess our defense against the reconstruction attack.
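The core mechanism, limiting how much information the encoder can pass on by low-pass filtering in the frequency domain, can be sketched in a few lines. The example below is a minimal, hypothetical illustration in NumPy: the function name `lowpass_filter` and the `cutoff` parameter are assumptions made for exposition, not the paper's actual encoder, which additionally trains the filtered representation adversarially.

```python
import numpy as np

def lowpass_filter(image: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Keep only low spatial frequencies of a 2-D grayscale image.

    `cutoff` is the fraction of the spectrum radius to retain; it is an
    illustrative knob, not a value taken from the paper.
    """
    h, w = image.shape
    # Move to the frequency domain and center the zero-frequency component.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Build a circular low-pass mask around the spectrum center.
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = radius <= cutoff * min(h, w) / 2
    # Zero out the high frequencies, then transform back to the image domain.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

if __name__ == "__main__":
    img = np.random.rand(64, 64)          # stand-in for a grayscale input
    out = lowpass_filter(img, cutoff=0.25)
    print(out.shape)                       # (64, 64): same size, detail removed
```

Placing such a filter in front of a learned encoder caps the spatial-frequency content a reconstruction attacker could exploit; the adversarial training described in the abstract then determines how the remaining low-frequency budget is spent on the task.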

Related research

09/15/2022 - CLIPping Privacy: Identity Inference Attacks on Multi-Modal Machine Learning Models
  As deep learning is now used in many real-world applications, research h...

08/28/2018 - Privacy-preserving Neural Representations of Text
  This article deals with adversarial attacks towards deep learning system...

06/14/2020 - Adversarial representation learning for synthetic replacement of private attributes
  The collection of large datasets allows for advanced analytics that can ...

06/08/2020 - Privacy Adversarial Network: Representation Learning for Mobile Data Privacy
  The remarkable success of machine learning has fostered a growing number...

06/23/2023 - Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
  Natural language processing (NLP) models have become increasingly popula...

09/08/2023 - FIVA: Facial Image and Video Anonymization and Anonymization Defense
  In this paper, we present a new approach for facial anonymization in ima...

12/20/2020 - DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks
  Recent deep learning models have shown remarkable performance in image c...
