Estimating g-Leakage via Machine Learning

by Marco Romanelli, et al.

This paper considers the problem of estimating the information leakage of a system in the black-box scenario. It is assumed that the system's internals are unknown to the learner, or in any case too complicated to analyze, and the only available information consists of pairs of input-output data samples, possibly obtained by submitting queries to the system or provided by a third party. Previous research has mainly focused on counting frequencies to estimate the input-output conditional probabilities (the frequentist approach); however, this method is not accurate when the domain of possible outputs is large. To overcome this difficulty, the estimation of the Bayes error of the ideal classifier was recently investigated using Machine Learning (ML) models, and it has been shown to be more accurate thanks to the ability of those models to learn the input-output correspondence. However, the Bayes vulnerability is only suitable for describing one-try attacks. A more general and flexible measure of leakage is the g-vulnerability, which encompasses several different types of adversaries, with different goals and capabilities. In this paper, we propose a novel approach to perform black-box estimation of the g-vulnerability using ML. A feature of our approach is that it does not require estimating the conditional probabilities, and that it is suitable for a large class of ML algorithms. First, we formally show learnability for all data distributions. Then, we evaluate the performance via various experiments using k-Nearest Neighbors and Neural Networks. Our results outperform the frequentist approach when the domain of observables is large.
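To make the frequentist baseline concrete, the following sketch estimates the g-vulnerability from (secret, observable) samples by counting joint frequencies: for each observed output, the adversary picks the guess maximizing the empirical expected gain. This is our illustration of the general idea, not code from the paper; the function and parameter names are our own, and as the abstract notes, this estimator degrades when the output domain is large relative to the sample size.

```python
from collections import Counter

def frequentist_g_vulnerability(samples, guesses, gain):
    """Frequentist estimate of V_g = sum_y max_w sum_x g(w, x) * P(x, y),
    where P(x, y) is replaced by the empirical joint frequency of the
    (secret, observable) pair (x, y) in `samples`.
    `gain(w, x)` is the adversary's gain for guess w when the secret is x.
    (Illustrative sketch; names are ours, not from the paper.)"""
    n = len(samples)
    joint = Counter(samples)  # counts of (secret, observable) pairs
    outputs = {y for _, y in samples}
    total = 0.0
    for y in outputs:
        # best achievable empirical expected gain given observable y
        total += max(
            sum(gain(w, x) * c / n for (x, yy), c in joint.items() if yy == y)
            for w in guesses
        )
    return total

# With the identity gain g(w, x) = 1 if w == x (else 0), V_g reduces to
# the Bayes vulnerability, i.e. the one-try attack success probability.
samples = [(0, 'a'), (0, 'a'), (1, 'a'), (1, 'b')]
v = frequentist_g_vulnerability(
    samples, guesses=[0, 1],
    gain=lambda w, x: 1.0 if w == x else 0.0)
# Here the best guess for 'a' is 0 (2/4) and for 'b' is 1 (1/4), so v = 0.75.
```

The choice of gain function is what makes g-vulnerability flexible: swapping in a different `gain` models adversaries with different goals (e.g. partial-credit guesses) without changing the estimator.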



F-BLEAU: Fast Black-box Leakage Estimation

We consider the problem of measuring how much a system reveals about its...

Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources

Current transfer learning methods are mainly based on finetuning a pretr...

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

Machine learning (ML) has progressed rapidly during the past decade and ...

Quantifying (Hyper) Parameter Leakage in Machine Learning

Black Box Machine Learning models leak information about the proprietary...

Stealing Black-Box Functionality Using The Deep Neural Tree Architecture

This paper makes a substantial step towards cloning the functionality of...

SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation

A black-box spectral method is introduced for evaluating the adversarial...

MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations

With the increasing popularity of deep neural networks (DNNs), it has re...
