Data-driven Regularized Inference Privacy

10/10/2020
by Chong Xiao Wang, et al.

Data is widely used by service providers as input to inference systems that perform decision making for authorized tasks. The raw data, however, also allows a service provider to infer other sensitive information it has not been authorized to access. We propose a data-driven inference privacy preserving framework that sanitizes data to prevent leakage of the sensitive information present in the raw data, while ensuring that the sanitized data remains compatible with the service provider's legacy inference system. We develop an inference privacy framework based on the variational method and incorporate maximum mean discrepancy and domain adaptation as techniques to regularize the domain of the sanitized data and ensure its legacy compatibility. However, the variational method leads to weak privacy when the underlying data distribution is hard to approximate, and it may also face difficulties in handling continuous private variables. To overcome this, we propose an alternative formulation of the privacy metric based on maximal correlation and present empirical methods to estimate it. Finally, we develop a deep learning model as an example implementation of the proposed inference privacy framework. Numerical experiments verify the feasibility of our approach.
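As a concrete illustration of the legacy-compatibility regularizer mentioned in the abstract, below is a minimal sketch, assuming a PyTorch setup, of a Gaussian-kernel maximum mean discrepancy (MMD) penalty that pulls sanitized samples toward the raw-data domain. The function names, kernel bandwidth, and the way the penalty is weighted into a training loss are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a Gaussian-kernel MMD penalty
# that could regularize a data sanitizer so its outputs stay in the raw-data
# domain and remain usable by a legacy inference system.
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian kernel values between rows of x and rows of y.
    d2 = torch.cdist(x, y, p=2.0) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd_penalty(sanitized, raw, bandwidth=1.0):
    # Simple (biased) empirical estimate of squared MMD between the
    # sanitized-data sample and the raw-data sample.
    k_ss = gaussian_kernel(sanitized, sanitized, bandwidth).mean()
    k_rr = gaussian_kernel(raw, raw, bandwidth).mean()
    k_sr = gaussian_kernel(sanitized, raw, bandwidth).mean()
    return k_ss + k_rr - 2.0 * k_sr

# Illustrative usage: add lam * mmd_penalty(sanitizer(x), x) to the training
# objective alongside a privacy term, so that reducing leakage does not push
# the sanitized data outside the domain the legacy system expects.
```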

