When Differential Privacy Meets Interpretability: A Case Study

06/24/2021
by Rakshit Naidu, et al.

Given the increase in the use of personal data for training Deep Neural Networks (DNNs) in tasks such as medical imaging and diagnosis, differentially private (DP) training of DNNs is surging in importance, and there is a large body of work focused on providing a better privacy-utility trade-off. However, little attention has been given to the interpretability of these models, and to how the application of DP affects the quality of interpretations. We propose an extensive study of the effects of DP training on DNNs, focusing on medical imaging applications and using the APTOS dataset.
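
To make the setting concrete, the sketch below shows one common way to combine the two ingredients the abstract describes: training a small image classifier with DP-SGD via the Opacus library, then computing a Grad-CAM-style saliency map from the trained weights to inspect interpretation quality. The architecture, hyperparameters, noise level, and the random tensors standing in for APTOS images are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: DP-SGD training with Opacus on a toy stand-in for
# APTOS-style retinal images, followed by a minimal Grad-CAM-style heatmap.
# Architecture, hyperparameters, and data are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Random tensors standing in for retinal fundus images with 5 severity grades.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 5, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32)

base_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),   # index 2: last conv layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5),
)
optimizer = torch.optim.SGD(base_model.parameters(), lr=0.05)

# Wrap model/optimizer/loader so per-sample gradients are clipped and noised (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=base_model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # assumed value; larger => stronger privacy, lower utility
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

model.train()
for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()

print(f"epsilon spent (delta=1e-5): {privacy_engine.get_epsilon(delta=1e-5):.2f}")

# Grad-CAM-style saliency on the trained weights (base_model shares parameters
# with the wrapped model), to inspect how DP noise affects the explanation.
model.disable_hooks()        # turn off Opacus per-sample gradient hooks
base_model.eval()
activations, gradients = {}, {}
target_layer = base_model[2]
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

score = base_model(images[:1])[0].max()                          # top-class logit
score.backward()
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)          # channel weights
cam = F.relu((weights * activations["a"]).sum(dim=1)).detach()   # 1 x H x W map
print("saliency map shape:", cam.shape)
```

Comparing such saliency maps across runs with different noise_multiplier values (or against a non-private baseline) is one way to probe how the privacy-utility trade-off carries over to interpretation quality, which is the kind of question the study examines.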
