State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems

03/22/2023
by Junbin Fang, et al.

Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations, imperceptible to the human eye, into the original input, which poses a serious security threat to deep-learning-based computer vision systems. Compared to digital adversarial attacks, physical adversarial attacks are more realistic: the perturbation is introduced to the input before it is captured and converted to a digital image inside the vision system. In this paper, we focus on physical adversarial attacks and further classify them into invasive and non-invasive. Optical-based physical adversarial attack techniques (e.g., those using light irradiation) belong to the non-invasive category. Because such perturbations closely resemble effects produced naturally in the real-world environment, they are easily overlooked by humans; they are highly inconspicuous, easy to execute, and can pose significant or even lethal threats to real systems. This paper surveys optical-based physical adversarial attack techniques for computer vision systems, with emphasis on their introduction and discussion.
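As a concrete illustration of the two attack styles the abstract contrasts, the sketch below pairs a standard digital perturbation (FGSM) with a crude simulation of a non-invasive optical perturbation: a soft-edged light spot added to the image, standing in for light projected onto the scene before the camera captures it. This is a minimal sketch assuming a PyTorch classifier; the model, tensor shapes, and all parameter values (epsilon, spot position, radius, intensity) are illustrative assumptions and do not come from the paper.

```python
# Illustrative sketch only: contrasts a digital perturbation (FGSM)
# with a simulated optical (light-spot) perturbation. The model and
# all parameter values are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def fgsm_digital_attack(model, image, label, epsilon=8 / 255):
    """Digital attack: add a small gradient-sign perturbation,
    imperceptible to the human eye, directly to the input tensor.
    Expects a batched (N, C, H, W) image in [0, 1] and long labels."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def simulated_light_spot(image, center=(112, 112), radius=40, intensity=0.6):
    """Non-invasive optical attack (simulated): brighten a circular
    region, mimicking a light beam hitting the scene before capture.
    Works on (C, H, W) or (N, C, H, W) tensors in [0, 1]."""
    h, w = image.shape[-2:]
    ys = torch.arange(h, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, -1)
    dist = ((ys - center[0]) ** 2 + (xs - center[1]) ** 2).sqrt()
    spot = (1.0 - dist / radius).clamp(min=0.0)  # soft-edged disc
    return (image + intensity * spot).clamp(0.0, 1.0)
```

In a real optical attack the light would be physically projected and its parameters optimized against the target model; the simulation above only illustrates why such perturbations blend in with natural lighting effects rather than looking like noise.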


Related research

01/02/2018
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Deep learning is at the heart of the current rise of machine learning an...

04/29/2022
Adversarial attacks on an optical neural network
Adversarial attacks have been extensively investigated for machine learn...

07/24/2023
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations
Camera-based autonomous systems that emulate human perception are increa...

03/02/2023
AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems
Vision-based perception modules are increasingly deployed in many applic...

07/20/2022
Illusionary Attacks on Sequential Decision Makers and Countermeasures
Autonomous intelligent agents deployed to the real-world need to be robu...

07/14/2023
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as...

12/10/2021
Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks
Deep neural networks have become the driving force of modern image recog...
