An Analysis of Protected Health Information Leakage in Deep-Learning Based De-Identification Algorithms

01/28/2021
by Salman Seyedi, et al.

The increasing complexity of algorithms for analyzing medical data, including de-identification tasks, raises the possibility that complex models learn not just a general representation of the problem but also specifics of the individuals in the data. Modern legal frameworks specifically prohibit the intentional or accidental distribution of patient data, but they have not addressed this potential avenue for leakage of protected health information. Modern deep learning algorithms carry the highest potential for such leakage because of the complexity of the models. Recent research has highlighted these issues in non-medical data, but any such analysis is likely to be data- and algorithm-specific. We therefore analyzed a state-of-the-art free-text de-identification algorithm based on LSTM (Long Short-Term Memory) networks and its potential to encode any individual in the training set. Using the i2b2 Challenge Data, we trained the model and then analyzed whether the output of the LSTM, taken before the compression layer of the classifier, could be used to estimate membership in the training data. We then applied several attacks, including a membership inference attack, to the model. Results indicate that the attacks could not distinguish members of the training data from non-members based on the model output. This suggests that the model does not provide strong evidence for identifying individuals in the training data set, and there is as yet no empirical evidence that it is unsafe to distribute the model for general use.
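The abstract describes probing the LSTM's pre-compression output with a membership inference attack. Below is a minimal, hypothetical sketch in Python of such an attack. The feature matrices here are random placeholders standing in for the real model outputs on training-set (member) and held-out (non-member) records, and a simple logistic-regression attack classifier tries to separate the two; an ROC AUC near 0.5 corresponds to the paper's finding that members were not distinguishable. The data shapes and classifier choice are illustrative assumptions, not the authors' actual setup.

    # Hypothetical sketch of a membership inference attack on a de-identification
    # model's output features. The arrays below stand in for the LSTM outputs
    # (taken before the classifier's compression layer); in practice they would
    # be computed by running training-set ("member") and held-out ("non-member")
    # records through the trained model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_records, n_features = 1000, 128  # e.g., 128-dimensional LSTM hidden states

    # Placeholder feature matrices; replace with real model outputs.
    member_features = rng.normal(size=(n_records, n_features))
    nonmember_features = rng.normal(size=(n_records, n_features))

    # Label 1 = "record was in the training set", 0 = "record was not".
    X = np.vstack([member_features, nonmember_features])
    y = np.concatenate([np.ones(n_records), np.zeros(n_records)])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    # The attack model: a simple classifier that tries to separate members
    # from non-members using only the de-identification model's outputs.
    attack = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, attack.predict_proba(X_test)[:, 1])

    # An AUC close to 0.5 means the outputs carry no usable membership signal,
    # which is the outcome the paper reports for its LSTM de-identifier.
    print(f"Membership inference attack AUC: {auc:.3f}")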
