Privacy Meets Explainability: A Comprehensive Impact Benchmark

by Saifullah Saifullah, et al.

Since the mid-2010s, the era of Deep Learning (DL) has continued to this day, bringing forth new superlatives and innovations each year. Nevertheless, the speed with which these innovations translate into real applications lags behind this fast pace. Safety-critical applications, in particular, are subject to strict regulatory and ethical requirements that must be met and are still active areas of debate. eXplainable AI (XAI) and privacy-preserving machine learning (PPML) are both crucial research fields, aiming at mitigating some of the drawbacks of prevailing data-hungry black-box models in DL. Despite brisk research activity in the respective fields, no attention has yet been paid to their interaction. This work is the first to investigate the impact of private learning techniques on the explanations generated for DL-based models. In an extensive experimental analysis covering various image and time series datasets from multiple domains, as well as varying privacy techniques, XAI methods, and model architectures, the effects of private training on generated explanations are studied. The findings suggest non-negligible changes in explanations through the introduction of privacy. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this work is a first step towards overcoming the remaining hurdles to practically applicable AI in safety-critical domains.
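The core experimental idea of the abstract — train a model with and without a privacy technique, then compare the explanations each model produces — can be sketched minimally. The following is not the paper's actual pipeline; it is a toy illustration assuming DP-SGD (per-example gradient clipping plus Gaussian noise) as the privacy technique, input gradients of a logistic-regression model as the XAI method, and cosine similarity as one possible way to quantify explanation drift. All data and hyperparameters are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (hypothetical stand-in for the paper's datasets).
n, d = 512, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, private=False, steps=200, lr=0.5, clip=1.0, noise_mult=1.0, batch=64):
    """Logistic regression via SGD. With private=True, per-example gradients
    are clipped and Gaussian noise is added (a DP-SGD-style sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        # Per-example gradients of the logistic loss, shape (batch, d).
        g = (sigmoid(Xb @ w) - yb)[:, None] * Xb
        if private:
            norms = np.linalg.norm(g, axis=1, keepdims=True)
            g = g / np.maximum(1.0, norms / clip)            # clip each example
            noise = rng.normal(scale=noise_mult * clip, size=w.shape)
            grad = (g.sum(axis=0) + noise) / batch           # noisy average
        else:
            grad = g.mean(axis=0)
        w -= lr * grad
    return w

def explanation(w):
    """Input-gradient attribution: for a linear model, the gradient of the
    logit w.r.t. the input is simply the weight vector."""
    return w

w_plain = train(X, y, private=False)
w_dp = train(X, y, private=True)

# Compare the two explanations; lower similarity = larger privacy-induced drift.
e1, e2 = explanation(w_plain), explanation(w_dp)
cos = float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12))
print(f"explanation similarity (cosine): {cos:.3f}")
```

A full study along the abstract's lines would swap in deep architectures, a calibrated privacy accountant, and attribution methods such as Integrated Gradients or GradCAM, but the comparison structure stays the same.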


Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods

In the last decade, neural networks have made a huge impact both in industry...

Explainable Artificial Intelligence (XAI): An Engineering Perspective

The remarkable advancements in Deep Learning (DL) algorithms have fueled...

Evaluating Privacy-Preserving Machine Learning in Critical Infrastructures: A Case Study on Time-Series Classification

With the advent of machine learning in applications of critical infrastr...

Security and Privacy Issues in Deep Learning

With the development of machine learning, expectations for artificial in...

Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations

The popularity of Deep Learning for real-world applications is ever-grow...

From Private to Public: Benchmarking GANs in the Context of Private Time Series Classification

Deep learning has proven to be successful in various domains and for dif...

SoK: Privacy-Preserving Data Synthesis

As the prevalence of data analysis grows, safeguarding data privacy has ...
