Differentially Private Attention Computation

by Yeqi Gao, et al.

Large language models (LLMs) have had a profound impact on numerous aspects of daily life, including natural language processing, content generation, and research methodologies. However, one crucial issue concerning the inference results of large language models is security and privacy. In many scenarios, the results generated by LLMs could leak confidential or copyrighted information. A recent beautiful and breakthrough work [Vyas, Kakade and Barak 2023] focuses on such privacy issues of LLMs from a theoretical perspective. It is well known that computing the attention matrix is one of the major tasks in LLM computation. Thus, how to give provable privacy guarantees for computing the attention matrix is an important research direction. Previous work [Alman and Song 2023, Brand, Song and Zhou 2023] has provided provably tight results for fast computation of attention without considering privacy concerns. One natural mathematical formulation for quantifying privacy in theoretical computer science is differential privacy. Inspired by [Vyas, Kakade and Barak 2023], in this work we provide a provable result showing how to differentially privately approximate the attention matrix. From a technical perspective, our result relies on a pioneering work in the area of differential privacy by [Alabi, Kothari, Tankala, Venkat and Zhang 2022].
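To make the object of study concrete, the sketch below computes the standard attention matrix softmax(QK^T / sqrt(d)) and a noisy variant that perturbs the pre-softmax scores with Gaussian noise. This is only an illustrative Gaussian-mechanism-style sketch, not the algorithm or analysis of this paper: the noise scale `sigma` is an arbitrary constant here, whereas a real differential-privacy guarantee would calibrate it to the sensitivity of the scores and a target (epsilon, delta).

```python
import numpy as np

def softmax_rows(M):
    # Row-wise softmax, with the row max subtracted for numerical stability.
    Z = np.exp(M - M.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

def attention_matrix(Q, K):
    # Standard scaled dot-product attention matrix: softmax(Q K^T / sqrt(d)).
    d = Q.shape[1]
    return softmax_rows(Q @ K.T / np.sqrt(d))

def noisy_attention_matrix(Q, K, sigma=0.1, seed=None):
    # Illustrative sketch only: add Gaussian noise to the attention scores
    # before the softmax. In a genuine DP analysis, sigma must be calibrated
    # to the sensitivity of the scores and the privacy budget; here it is
    # just a fixed constant to show the shape of the computation.
    rng = np.random.default_rng(seed)
    d = Q.shape[1]
    S = Q @ K.T / np.sqrt(d)
    return softmax_rows(S + rng.normal(0.0, sigma, size=S.shape))

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
A = attention_matrix(Q, K)
A_priv = noisy_attention_matrix(Q, K, sigma=0.1, seed=1)
print(np.abs(A - A_priv).max())  # approximation error from the added noise
```

Both outputs are row-stochastic matrices, so the noisy version remains a valid attention matrix; the question the paper addresses is how well such a private approximation can track the true one with provable guarantees.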




