EW-Tune: A Framework for Privately Fine-Tuning Large Language Models with Differential Privacy

10/26/2022
by Rouzbeh Behnia, et al.

Pre-trained Large Language Models (LLMs) are an integral part of modern AI and have led to breakthrough performance on complex AI tasks. Major AI companies with expensive infrastructures are able to develop and train these large models, with billions of parameters, from scratch. Third parties, researchers, and practitioners are increasingly adopting these pre-trained models and fine-tuning them on their private data to accomplish their downstream AI tasks. However, it has been shown that an adversary can extract or reconstruct exact training samples from these LLMs, which can lead to the disclosure of personally identifiable information. This issue has raised deep concerns about the privacy of LLMs. Differential privacy (DP) provides a rigorous framework for adding noise during the training or fine-tuning of LLMs such that extracting the training data becomes infeasible (i.e., succeeds only with a cryptographically small probability). While the theoretical privacy guarantees offered in most extant studies assume learning models from scratch over many training iterations in an asymptotic setting, this assumption does not hold in fine-tuning scenarios, where the number of training iterations is significantly smaller. To address this gap, we present EW-Tune, a DP framework for fine-tuning LLMs based on the Edgeworth accountant with finite-sample privacy guarantees. Our results across four well-established natural language understanding (NLU) tasks show that while EW-Tune adds privacy guarantees to the LLM fine-tuning process, it directly contributes to decreasing the induced noise by up to 5.6% and improves the performance of state-of-the-art LLMs by up to 1.1% across all NLU tasks. We have open-sourced our implementation for wide adoption and public testing.
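At a high level, DP fine-tuning of this kind perturbs the gradient updates: each per-example gradient is clipped to a fixed norm, Gaussian noise calibrated to that clipping bound is added before the parameter update, and a privacy accountant (here, the Edgeworth accountant) tracks the cumulative privacy loss across iterations. The sketch below illustrates only the generic noisy-gradient step that such frameworks build on; the dp_sgd_step helper, the clipping bound, and the noise multiplier are illustrative assumptions, not EW-Tune's actual implementation or its accountant.

    import torch

    def dp_sgd_step(model, loss_fn, batch, optimizer,
                    clip_norm=1.0, noise_multiplier=0.6):
        """One DP-SGD-style update: clip each per-example gradient to clip_norm,
        sum the clipped gradients, add Gaussian noise scaled by
        noise_multiplier * clip_norm, average, and step the optimizer."""
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        inputs, targets = batch
        for x, y in zip(inputs, targets):
            # Per-example gradient (microbatch of size 1).
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            # Clip the whole per-example gradient to L2 norm <= clip_norm.
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
            for s, g in zip(summed, grads):
                s.add_(g * scale)

        batch_size = len(inputs)
        for p, s in zip(params, summed):
            # Gaussian noise calibrated to the clipping bound (the sensitivity).
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.grad = (s + noise) / batch_size
        optimizer.step()
        optimizer.zero_grad()

In practice, the noise multiplier would be chosen so that the accountant's finite-sample bound meets the target privacy budget for the (comparatively small) number of fine-tuning iterations, which is exactly the regime the paper targets.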
