Black-box Adversarial Sample Generation Based on Differential Evolution

07/30/2020
by Junyu Lin, et al.

Deep Neural Networks (DNNs) are used in a variety of daily tasks such as object detection, speech processing, and machine translation. However, DNNs are known to suffer from robustness problems: perturbed inputs, called adversarial samples, can cause DNNs to misbehave. In this paper, we propose a black-box technique called Black-box Momentum Iterative Fast Gradient Sign Method (BMI-FGSM) for testing the robustness of DNN models. The technique requires no knowledge of the structure or weights of the target DNN. In contrast to existing white-box testing techniques, which require access to model internals such as gradients, our technique approximates gradients through Differential Evolution and uses the approximated gradients to construct adversarial samples. Experimental results show that our technique achieves a 100% success rate in generating adversarial samples that trigger misclassification, and an over 95% success rate in generating samples that trigger misclassification to a specific target output label. It also demonstrates better perturbation distance and better transferability. Compared to the state-of-the-art black-box technique, our technique is more efficient. Furthermore, we test the commercial Aliyun API and successfully trigger its misbehavior within a limited number of queries, demonstrating the feasibility of real-world black-box attacks.
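The abstract only outlines the approach, so the following is a minimal sketch of how a Differential-Evolution gradient-sign estimate can drive a momentum-iterative FGSM update. The `loss_fn` oracle, the population size, the mutation and crossover rates, and the small probing step are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np


def approx_grad_sign(loss_fn, x, pop_size=10, iters=20, mut=0.5, cross=0.7,
                     step=1e-2, rng=None):
    """Estimate sign(dL/dx) with differential evolution (DE).

    Candidates are direction vectors in [-1, 1] with x's shape; a
    candidate's fitness is the black-box loss after a small step in its
    sign direction, so a higher loss means a better gradient guess.
    """
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size,) + x.shape)
    fit = np.array([loss_fn(x + step * np.sign(p)) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1 mutation: combine three distinct other candidates.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + mut * (b - c), -1.0, 1.0)
            # Binomial crossover, then greedy selection against the parent.
            trial = np.where(rng.random(x.shape) < cross, mutant, pop[i])
            f = loss_fn(x + step * np.sign(trial))
            if f > fit[i]:
                pop[i], fit[i] = trial, f
    return np.sign(pop[int(fit.argmax())])


def bmi_fgsm(loss_fn, x, eps=0.1, steps=10, decay=1.0):
    """Momentum-iterative FGSM driven by DE-approximated gradient signs."""
    alpha = eps / steps           # per-step perturbation budget
    g = np.zeros_like(x)          # accumulated momentum
    adv = x.astype(np.float64).copy()
    for _ in range(steps):
        g = decay * g + approx_grad_sign(loss_fn, adv)
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)
        adv = np.clip(adv, 0.0, 1.0)  # assumes inputs normalized to [0, 1]
    return adv
```

Here `loss_fn` stands in for the query interface to the target model (e.g., the cross-entropy of the true label computed from returned class probabilities); for a targeted attack one would instead maximize the target label's probability. Only the sign of each estimated gradient is kept, mirroring the FGSM family, while the momentum term stabilizes the noisy DE estimates across iterations.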
