On Evaluating the Adversarial Robustness of Semantic Segmentation Models

06/25/2023
by Levente Halmosi, et al.

Achieving robustness against adversarial input perturbation is an important and intriguing problem in machine learning. In the area of semantic image segmentation, a number of adversarial training approaches have been proposed as a defense against adversarial perturbation, but the methodology for evaluating the robustness of these models still lags behind that of image classification. Here, we demonstrate that, just as in image classification, it is important to evaluate models against several different, strong attacks. We propose a set of gradient-based iterative attacks and show that it is essential to perform a large number of iterations. We also include attacks against the internal representations of the models. We apply two types of attacks: maximizing the error under a bounded perturbation, and minimizing the perturbation required to reach a given level of error. Using this set of attacks, we show for the first time that several models claimed to be robust in previous work are in fact not robust at all. We then evaluate simple adversarial training algorithms that produce reasonably robust models even under our set of strong attacks. Our results indicate that a key design decision for achieving any robustness is to train on adversarial examples only; however, this introduces a trade-off between robustness and accuracy.

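To make the first attack type concrete, below is a minimal sketch of an L-infinity PGD-style iterative gradient attack that maximizes the per-pixel error of a segmentation model under a bounded perturbation. This is not the authors' code; the PyTorch framing, the cross-entropy objective, and the values of eps, alpha, and steps are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_segmentation_attack(model, images, labels,
                            eps=8 / 255, alpha=2 / 255, steps=100):
    """Iterative gradient attack: maximize mean pixel-wise cross-entropy
    while keeping the perturbation inside an L_inf ball of radius eps.
    images: (N, 3, H, W) in [0, 1]; labels: (N, H, W) class indices."""
    x_adv = images.clone().detach()
    # Random start inside the eps-ball, as in standard PGD.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                    # (N, C, H, W) per-pixel logits
        loss = F.cross_entropy(logits, labels)   # mean over all pixels
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss, then project back onto the eps-ball
            # around the clean images and onto the valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = images + torch.clamp(x_adv - images, -eps, eps)
            x_adv = torch.clamp(x_adv, 0, 1)
    return x_adv.detach()
```

The paper's point about iteration counts applies here: a small `steps` value can make a non-robust model appear robust, so evaluation should use many iterations.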
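The key training design decision mentioned above can likewise be sketched. The following hypothetical training step reuses `pgd_segmentation_attack` and the imports from the previous sketch and trains on adversarial examples only, never on the clean images; the choice of 10 attack steps during training is a common convention assumed here, not a setting taken from the paper.

```python
def adversarial_training_step(model, optimizer, images, labels):
    """One training step that updates the model on adversarial examples only."""
    model.eval()  # keep batch-norm statistics fixed while crafting the attack
    x_adv = pgd_segmentation_attack(model, images, labels, steps=10)
    model.train()
    optimizer.zero_grad()
    # Only the adversarial batch contributes to the gradient update.
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```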