Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models

12/03/2021
by   Tochukwu Idika, et al.

The transferability of adversarial samples has become a serious concern due to its impact on the reliability of machine learning deployments, as these samples find their way into many critical applications. Knowing the factors that influence the transferability of adversarial samples can assist experts in making informed decisions on how to build robust and reliable machine learning systems. The goal of this study is to provide insight into the mechanisms behind the transferability of adversarial samples through an attack-centric approach. This attack-centric perspective interprets how adversarial samples transfer by assessing the impact of the machine learning attacks that generated them on a given input dataset. To achieve this goal, we generated adversarial samples using attacker models and transferred these samples to victim models. We analyzed the behavior of the adversarial samples on the victim models and outlined four factors that can influence their transferability. Although these factors are not necessarily exhaustive, they provide useful insights to researchers and practitioners of machine learning systems.
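The attacker/victim setup the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's actual experiment): two logistic-regression models are trained on disjoint halves of a synthetic dataset, FGSM-style adversarial samples are crafted against the attacker (surrogate) model, and transferability is measured as the accuracy drop they cause on the victim model. All model names, hyperparameters, and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# Synthetic two-class data.
X = rng.normal(size=(400, 10))
true_w = rng.normal(size=10)
y = (X @ true_w + 0.1 * rng.normal(size=400) > 0).astype(float)

# Attacker (surrogate) and victim models trained on disjoint halves.
wa, ba = train_logreg(X[:200], y[:200])
wv, bv = train_logreg(X[200:], y[200:])

# FGSM against the attacker model: step along the sign of the input
# gradient of the logistic loss, d(loss)/dx = (p - y) * w.
Xt, yt = X[200:], y[200:]
p = sigmoid(Xt @ wa + ba)
grad_x = (p - yt)[:, None] * wa[None, :]
eps = 0.5
X_adv = Xt + eps * np.sign(grad_x)

# Transferability: how much the victim's accuracy degrades on samples
# it never saw, crafted only with the attacker's gradients.
clean_acc = accuracy(wv, bv, Xt, yt)
adv_acc = accuracy(wv, bv, X_adv, yt)
print(f"victim accuracy: clean={clean_acc:.2f}, adversarial={adv_acc:.2f}")
```

Because both models approximate the same decision boundary, perturbations computed from the attacker's gradients also degrade the victim, which is the transfer phenomenon the study analyzes.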


