On the Trade-Off between Actionable Explanations and the Right to be Forgotten

08/30/2022
by Martin Pawelczyk et al.

As machine learning (ML) models are increasingly being deployed in high-stakes applications, policymakers have suggested tighter data protection regulations (e.g., GDPR, CCPA). One key principle is the "right to be forgotten", which gives users the right to have their data deleted. Another key principle is the right to an actionable explanation, also known as algorithmic recourse, which allows users to reverse unfavorable decisions. To date, it is unknown whether these two principles can be operationalized simultaneously. Therefore, we introduce and study the problem of recourse invalidation in the context of data deletion requests. More specifically, we theoretically and empirically analyze the behavior of popular state-of-the-art algorithms and demonstrate that the recourses generated by these algorithms are likely to be invalidated if a small number of data deletion requests (e.g., 1 or 2) warrant updates of the predictive model. For linear models and overparameterized neural networks – studied through the lens of neural tangent kernels (NTKs) – we propose a framework to identify a minimal subset of critical training points whose removal maximizes the fraction of invalidated recourses. Using our framework, we empirically show that removing as few as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms. Thus, our work raises fundamental questions about the compatibility of the "right to an actionable explanation" with the "right to be forgotten", while also providing constructive insights on the determining factors of recourse robustness.
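To make the deletion-induced invalidation concrete, below is a minimal, hypothetical sketch (not the authors' implementation or optimization framework): a greedy search over a linear model's training set that removes up to a small budget of points, retrains the model, and measures how many previously valid recourses flip back to the unfavorable class. The function names (`fraction_invalidated`, `find_critical_deletions`), the use of scikit-learn's logistic regression, and the assumption that recourses target class 1 are illustrative assumptions.

```python
# Hedged sketch: greedy search for a small set of training points whose
# deletion invalidates recourses of a linear model. Brute-force illustration
# only; the paper's framework derives critical points analytically.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fraction_invalidated(X_train, y_train, X_cf):
    """Retrain on the (possibly reduced) training set and return the fraction
    of recourses X_cf that no longer receive the favorable label (class 1)."""
    clf = LogisticRegression().fit(X_train, y_train)
    return np.mean(clf.predict(X_cf) != 1)

def find_critical_deletions(X_train, y_train, X_cf, budget=2):
    """Greedily pick up to `budget` training indices whose removal maximizes
    the fraction of invalidated recourses (O(budget * n) retrainings)."""
    remaining = list(range(len(X_train)))
    removed, best_score = [], 0.0
    for _ in range(budget):
        best_idx = None
        for i in remaining:
            keep = [j for j in remaining if j != i]
            score = fraction_invalidated(X_train[keep], y_train[keep], X_cf)
            if score > best_score:
                best_idx, best_score = i, score
        if best_idx is None:  # no single deletion improves the score further
            break
        removed.append(best_idx)
        remaining.remove(best_idx)
    return removed, best_score
```

A usage example would pass the training data together with a matrix `X_cf` of counterfactual points produced by any recourse method; the returned score is the fraction of those recourses invalidated after the deletions.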


