OLIVE: Oblivious and Differentially Private Federated Learning on Trusted Execution Environment

by Fumiyuki Kato, et al.

Differentially private federated learning (DP-FL) has received increasing attention as a way to mitigate the privacy risks of federated learning. Although various DP-FL schemes have been proposed, a utility gap remains. Employing central differential privacy in FL (CDP-FL) offers a good balance between privacy and model utility, but requires a trusted server. Local differential privacy for FL (LDP-FL) removes the need for a trusted server, but suffers from a poor privacy-utility trade-off. Recently proposed shuffle-DP-based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, a utility gap persists when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE). Our main technical contributions are the analysis of, and countermeasures against, the vulnerability of the TEE in OLIVE. First, we theoretically analyze the memory access pattern leakage of OLIVE and find that it poses a risk for sparsified gradients, which are common in FL. Second, we design an inference attack to understand how the memory access pattern can be linked to the training data. Third, we propose oblivious yet efficient algorithms that prevent memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters, and effective against side-channel attacks on the TEE.
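To illustrate the leakage the abstract describes, here is a minimal sketch (function names and data layout are illustrative assumptions, not from the paper): aggregating sparsified gradients naively touches only the indices each client sent, so the sequence of memory accesses depends on the data, whereas a linear-scan variant fixes the access sequence independently of which entries are non-zero. A real oblivious implementation would use hardened primitives (e.g., oblivious sorting or ORAM); this sketch only conveys the idea.

```python
def aggregate_naive(model_size, sparse_updates):
    """Adds each client's (index, value) pairs directly.
    The sequence of indices touched depends on which parameters each
    client sent, which a side-channel observer of the TEE could see."""
    agg = [0.0] * model_size
    for update in sparse_updates:
        for idx, val in update:  # data-dependent access pattern
            agg[idx] += val
    return agg


def aggregate_linear_scan(model_size, sparse_updates):
    """Sketch of an oblivious-style variant: every parameter position is
    visited for every update, so the top-level access sequence is fixed
    by model_size and the number of updates, not by the sparsity pattern.
    (The dict lookup is itself data-dependent; a hardened implementation
    would replace it with an oblivious primitive.)"""
    agg = [0.0] * model_size
    for update in sparse_updates:
        dense = dict(update)
        for pos in range(model_size):  # fixed, data-independent scan
            agg[pos] += dense.get(pos, 0.0)
    return agg
```

Both variants compute the same sum; they differ only in which memory locations are touched, which is exactly the signal a side-channel attacker observes.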


