Complex-valued deep learning with differential privacy

10/07/2021
by Alexander Ziller, et al.

We present ζ-DP, an extension of differential privacy (DP) to complex-valued functions. After introducing the complex Gaussian mechanism, whose properties we characterise in terms of (ε, δ)-DP and Rényi-DP, we present ζ-DP stochastic gradient descent (ζ-DP-SGD), a variant of DP-SGD for training complex-valued neural networks. We experimentally evaluate ζ-DP-SGD on three complex-valued tasks: electrocardiogram classification, speech classification, and magnetic resonance imaging (MRI) reconstruction. Moreover, we provide ζ-DP-SGD benchmarks for a large variety of complex-valued activation functions and on a complex-valued variant of the MNIST dataset. Our experiments demonstrate that DP training of complex-valued neural networks is possible with rigorous privacy guarantees and excellent utility.
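The complex Gaussian mechanism mentioned above can be sketched as adding circularly-symmetric complex Gaussian noise, calibrated to the query's sensitivity, to a complex-valued output. The snippet below is a minimal illustration of that idea, not the paper's exact formulation; the function name, the calibration via a noise multiplier, and the per-component variance split are assumptions for the sketch.

```python
import numpy as np

def complex_gaussian_mechanism(value, sensitivity, noise_multiplier, rng=None):
    """Illustrative sketch of a complex Gaussian mechanism.

    Perturbs a complex-valued query output with circularly-symmetric
    complex Gaussian noise: independent real and imaginary components,
    each with variance sigma**2 / 2, so the total noise variance is
    sigma**2. The calibration sigma = noise_multiplier * sensitivity
    mirrors the real-valued Gaussian mechanism and is an assumption here.
    """
    rng = rng if rng is not None else np.random.default_rng()
    sigma = noise_multiplier * sensitivity
    value = np.asarray(value, dtype=complex)
    # Split the variance evenly between real and imaginary parts.
    scale = sigma / np.sqrt(2.0)
    noise = rng.normal(0.0, scale, size=value.shape) \
        + 1j * rng.normal(0.0, scale, size=value.shape)
    return value + noise
```

In a ζ-DP-SGD-style training loop, one would analogously clip each per-example complex gradient to a bounded modulus (fixing the sensitivity) before applying such a mechanism to the summed gradients.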
