Protecting User Privacy in Remote Conversational Systems: A Privacy-Preserving Framework Based on Text Sanitization

06/14/2023
by Zhigang Kan, et al.

Large Language Models (LLMs) are attracting increasing attention due to their exceptional performance across numerous tasks. The general public uses them as a powerful tool for boosting productivity, while natural language processing researchers seek to apply them to existing and new research problems. Unfortunately, individuals can access such powerful models only through APIs, which means raw data is transmitted to the models' providers and raises the risk of private data leakage. Existing privacy-preserving methods for cloud-deployed language models aim to protect private information in the pre-training dataset or during the model training phase; they do not address the specific challenges posed by the remote-access mode of new large-scale language models. This paper introduces a novel task, "User Privacy Protection for Dialogue Models," which aims to safeguard sensitive user information from any possible disclosure while conversing with chatbots. We also present an evaluation scheme for this task, covering metrics for privacy protection, data availability, and resistance to simulation attacks. Moreover, we propose the first framework for this task: privacy protection through text sanitization. Before sending the input to the remote large model, the framework filters out sensitive information through several rounds of text sanitization based on user-defined privacy types. Upon receiving a response from the large model, it automatically restores the sanitized information so that the conversation proceeds smoothly and the privacy filter remains invisible to the user. Experiments on real-world datasets demonstrate the efficacy of our privacy-preserving approach against eavesdropping by potential attackers.
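The abstract does not detail the sanitization mechanism itself, so the following is only a minimal sketch of the general placeholder-and-restore idea it describes. It assumes regex-based detectors and a stubbed remote model; the names PRIVACY_PATTERNS, sanitize, restore, and remote_llm are illustrative, not the paper's actual API.

```python
import re

# Hypothetical user-defined privacy types mapped to detection patterns.
# The paper's real detectors are unspecified; these regexes are stand-ins.
PRIVACY_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text, patterns=PRIVACY_PATTERNS):
    """Replace each detected private span with a typed placeholder and
    record the mapping for later restoration. Each privacy type gets its
    own pass, loosely mirroring the multi-round sanitization described."""
    mapping = {}
    for ptype, pattern in patterns.items():
        # dict.fromkeys dedupes matches while preserving their order
        for i, span in enumerate(dict.fromkeys(pattern.findall(text))):
            placeholder = f"<{ptype}_{i}>"
            mapping[placeholder] = span
            text = text.replace(span, placeholder)
    return text, mapping

def restore(text, mapping):
    """Substitute placeholders in the remote model's reply back with
    the original private values, kept client-side the whole time."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

def remote_llm(prompt):
    # Stand-in for the remote API call; it only ever sees placeholders.
    return "Sure, I will email <EMAIL_0> about the meeting."

user_input = "Please draft a note to alice@example.com about the meeting."
clean, mapping = sanitize(user_input)
reply = restore(remote_llm(clean), mapping)
print(clean)  # Please draft a note to <EMAIL_0> about the meeting.
print(reply)  # Sure, I will email alice@example.com about the meeting.
```

The key design point the abstract implies is that the placeholder-to-value mapping never leaves the client, so the provider only ever sees masked text; a real system would presumably replace the toy regexes with stronger detectors chosen per the user's declared privacy types.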


Related research:

12/16/2022 · Planting and Mitigating Memorized Content in Predictive-Text Language Models
Language models are widely deployed to provide automatic text completion...

10/06/2021 · Multi-Trigger-Key: Towards Multi-Task Privacy Preserving In Deep Learning
Deep learning-based Multi-Task Classification (MTC) is widely used in ap...

03/16/2021 · No Intruder, no Validity: Evaluation Criteria for Privacy-Preserving Text Anonymization
For sensitive text data to be shared among NLP researchers and practitio...

06/03/2016 · Using Neural Generative Models to Release Synthetic Twitter Corpora with Reduced Stylometric Identifiability of Users
We present a method for generating synthetic versions of Twitter data us...

10/13/2022 · Mitigating Unintended Memorization in Language Models via Alternating Teaching
Recent research has shown that language models have a tendency to memori...

10/10/2020 · Data-driven Regularized Inference Privacy
Data is used widely by service providers as input to inference systems t...

02/09/2023 · Bag of Tricks for Training Data Extraction from Language Models
With the advance of language models, privacy protection is receiving mor...
