Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?

by Max Lennon, et al.

Perturbation-based attacks, while not physically realizable, have been the main emphasis of adversarial machine learning (ML) research. Patch-based attacks, by contrast, are physically realizable, yet most work has focused on the 2D domain, with only recent forays into 3D. Characterizing the robustness of patch attacks and their invariance to 3D pose is important but not yet fully elucidated, and it is the focus of this paper. To this end, this paper makes several contributions: (A) we develop a new metric, mean Attack Success over Transformations (mAST), to evaluate patch attack robustness and invariance; (B) we systematically assess the robustness of patch attacks to 3D position and orientation under various conditions; in particular, we conduct a sensitivity analysis that yields important qualitative insights into attack effectiveness as a function of the patch's 3D pose relative to the camera (rotation, translation) and sets forth properties of 3D invariance for patch attacks; and (C) we draw novel qualitative conclusions, including: (1) for some 3D transformations, namely rotation and loom, increasing the support of the training distribution increases patch success over the full range of poses at test time; and (2) patch attack effectiveness exhibits a fundamental cutoff that depends on the extent of out-of-plane rotation. These findings should collectively guide the future design of 3D patch attacks and defenses.
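As described in the abstract, mAST is the attack success rate averaged over a set of sampled 3D transformations. The abstract does not give a formula, so the following is a minimal sketch of one plausible reading: the success predicate and the pose sampling (here, out-of-plane rotation angles) are hypothetical stand-ins, not the authors' implementation.

```python
def mast(attack_succeeded, transformations):
    """Mean Attack Success over Transformations (mAST), sketched.

    attack_succeeded: callable mapping a transformation (pose) to
        True/False -- whether the patch fools the model at that pose.
    transformations: iterable of sampled 3D poses (e.g. rotation angles).
    Returns the fraction of sampled poses at which the attack succeeds.
    """
    poses = list(transformations)
    successes = sum(1 for t in poses if attack_succeeded(t))
    return successes / len(poses)


# Toy illustration of the cutoff behavior mentioned in the abstract:
# suppose the attack succeeds only within +/-30 degrees of out-of-plane
# rotation (a made-up threshold for illustration).
angles = range(-90, 91, 10)  # 19 sampled rotation angles
score = mast(lambda a: abs(a) <= 30, angles)  # 7 of 19 poses succeed
```

Under this reading, a patch that is fully invariant to the sampled transformations would score 1.0, while a narrow cutoff in effective rotation angles drives the score down proportionally.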




