Who's Afraid of Adversarial Transferability?

by Ziv Katzir, et al.

Adversarial transferability, namely the ability of adversarial perturbations to simultaneously fool multiple learning models, has long been the "big bad wolf" of adversarial machine learning. Successful transferability-based attacks requiring no prior knowledge of the attacked model's parameters or training data have been demonstrated numerous times in the past, implying that real-life systems built on machine learning models face an inherent security threat. However, all of the research performed in this area regarded transferability as a probabilistic property and attempted to estimate the percentage of adversarial examples that are likely to mislead a target model given some predefined evaluation set. As a result, those studies ignored the fact that real-life adversaries are often highly sensitive to the cost of a failed attack. We argue that overlooking this sensitivity has led to an exaggerated perception of the transferability threat, when in fact real-life transferability-based attacks are quite unlikely. By combining theoretical reasoning with a series of empirical results, we show that it is practically impossible to predict whether a given adversarial example is transferable to a specific target model in a black-box setting, hence questioning the validity of adversarial transferability as a real-life attack tool for adversaries that are sensitive to the cost of a failed attack.
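The "probabilistic property" framing the abstract critiques can be illustrated with a minimal sketch: craft adversarial examples against a known surrogate model, then measure what fraction of the successful ones also fool a separate target model. The linear models, toy data, and FGSM-style perturbation below are all assumptions for illustration, not the paper's actual experimental setup.

```python
# Toy estimate of an adversarial transfer rate between two models.
# Both classifiers, the data, and the epsilon value are illustrative
# assumptions; real studies use neural networks and image datasets.
import random

random.seed(0)

def predict(w, x):
    """Linear classifier: sign of the inner product <w, x>."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def fgsm_step(w, x, y, eps):
    """FGSM-style perturbation against a linear model: move each
    coordinate by eps opposite to the label along sign(w)."""
    return [xi - y * eps * (1 if wi >= 0 else -1)
            for wi, xi in zip(w, x)]

# Surrogate (attacker-controlled) and target (black-box) models with
# similar but not identical decision boundaries.
w_surrogate = [1.0, 1.0]
w_target = [1.0, 0.8]

# Evaluation set, labelled by the surrogate's own boundary.
points = [[random.uniform(-1, 1), random.uniform(-1, 1)]
          for _ in range(200)]
labels = [predict(w_surrogate, x) for x in points]

# Count examples that fool the surrogate and, of those, the target too.
fooled_surrogate, transferred = 0, 0
for x, y in zip(points, labels):
    x_adv = fgsm_step(w_surrogate, x, y, eps=0.3)
    if predict(w_surrogate, x_adv) != y:
        fooled_surrogate += 1
        if predict(w_target, x_adv) != y:
            transferred += 1

transfer_rate = transferred / max(fooled_surrogate, 1)
print(f"fooled surrogate: {fooled_surrogate}/200, "
      f"transfer rate: {transfer_rate:.2f}")
```

Note that the transfer rate is an aggregate over the whole evaluation set: it says nothing about whether any *particular* adversarial example will transfer, which is exactly the gap the paper argues makes such attacks unattractive to adversaries who pay a price for each failed attempt.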


