BO-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization

06/04/2021
by Zhuosheng Zhang, et al.

Decision-based attacks (DBA), in which attackers perturb inputs to fool learning algorithms while observing only the output labels, are a severe class of adversarial attacks against deep neural networks (DNNs) because they require minimal knowledge of the target model. State-of-the-art DBA techniques rely on zeroth-order gradient estimation and therefore require an excessive number of queries. Recently, Bayesian optimization (BO) has shown promise in reducing the number of queries in score-based attacks (SBA), where attackers observe real-valued probability scores as outputs. However, extending BO to the DBA setting is nontrivial: in DBA, only output labels, rather than the real-valued scores BO needs, are available to attackers. In this paper, we close this gap by proposing an efficient DBA attack, BO-DBA. Unlike existing approaches, BO-DBA generates adversarial examples by searching over directions of perturbation. It then formulates the search as a BO problem that minimizes the real-valued distortion of the perturbation. With this optimized perturbation-generation process, BO-DBA converges much faster than state-of-the-art DBA techniques. Experimental results on pre-trained ImageNet classifiers show that BO-DBA converges within 200 queries, while state-of-the-art DBA techniques need over 15,000 queries to reach the same level of distortion. BO-DBA also achieves attack success rates comparable to those of BO-based SBA attacks, but with less distortion.
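The key move described above — converting label-only feedback into a real-valued objective that BO can minimize — can be sketched on a toy 2-D problem. Everything below is illustrative and not the authors' implementation: the linear hard-label oracle, the angle parameterization of directions, the numpy-only Gaussian-process surrogate, and the lower-confidence-bound acquisition are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hard-label oracle: a linear classifier whose only observable
# output is the predicted label (the DBA setting).
w, b = np.array([3.0, 4.0]), 10.0
def query_label(x):
    return int(x @ w > b)

x0 = np.array([0.0, 0.0])  # benign input; query_label(x0) == 0

def distortion(theta, hi=20.0, tol=1e-4):
    """g(theta): smallest step along the unit direction at angle theta that
    flips the label, found by binary search over hard-label queries only.
    This is the real-valued objective that makes BO applicable to DBA."""
    d = np.array([np.cos(theta), np.sin(theta)])
    if query_label(x0 + hi * d) == 0:
        return hi                      # direction never crosses the boundary
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if query_label(x0 + mid * d) else (mid, hi)
    return hi

def rbf(a, b_, ls=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b_[None, :]) ** 2 / ls ** 2)

def bo_minimize(f, bounds=(0.0, 2 * np.pi), n_init=4, n_iter=30, jitter=1e-4):
    """Minimal GP-based Bayesian optimization with a lower-confidence-bound
    acquisition, evaluated on a dense grid (adequate for a 1-D toy problem)."""
    X = rng.uniform(bounds[0], bounds[1], n_init)
    y = np.array([f(t) for t in X])
    grid = np.linspace(bounds[0], bounds[1], 400)
    for _ in range(n_iter):
        ys = (y - y.mean()) / (y.std() + 1e-9)       # standardize targets
        Kinv = np.linalg.inv(rbf(X, X) + jitter * np.eye(len(X)))
        ks = rbf(grid, X)
        mu = ks @ Kinv @ ys                          # GP posterior mean
        var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
        t_next = grid[np.argmin(mu - np.sqrt(var))]  # LCB acquisition
        X, y = np.append(X, t_next), np.append(y, f(t_next))
    i = int(np.argmin(y))
    return X[i], y[i]

best_theta, best_dist = bo_minimize(distortion)
# Analytic minimum distortion for this toy oracle: (b - w @ x0) / ||w|| = 2.0
```

Each `distortion` evaluation costs only a handful of label queries (one probe plus a binary search), so the BO loop's total query budget stays small — the same property that lets BO-DBA converge in far fewer queries than gradient-estimation-based DBA.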


