DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

02/05/2021
by Chong Xiang, et al.

State-of-the-art object detectors are vulnerable to localized patch hiding attacks, in which an adversary introduces a small adversarial patch that makes the detector miss salient objects. In this paper, we propose DetectorGuard, the first general framework for building provably robust object detectors against localized patch hiding attacks. First, we propose a general approach for transferring robustness from image classifiers to object detectors, building a bridge between robust image classification and robust object detection: we apply a provably robust image classifier to a sliding window over the image and aggregate the robust window classifications at different locations into a robust object detection. Second, to mitigate the notorious trade-off between clean performance and provable robustness, we use a prediction pipeline that compares the outputs of a conventional detector and a robust detector to catch an ongoing attack. When no attack is detected, DetectorGuard outputs the precise bounding boxes predicted by the conventional detector to achieve high clean performance; otherwise, it triggers an attack alert for security. Notably, this prediction strategy ensures that the robust detector incorrectly missing objects does not hurt the clean performance of DetectorGuard. Moreover, our approach allows us to formally prove the robustness of DetectorGuard on certified objects against any patch hiding attacker: it either detects the object or triggers an alert. Our evaluation on the PASCAL VOC and MS COCO datasets shows that DetectorGuard has almost the same clean performance as conventional detectors and, more importantly, achieves the first provable robustness against localized patch hiding attacks.
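To make the two-stage design concrete, below is a minimal Python sketch of the prediction pipeline the abstract describes: a conventional detector supplies precise boxes, a provably robust classifier applied to a sliding window supplies a robust objectness map, and a mismatch between the two triggers an alert. All function names, parameter values, and the simple overlap-based comparison are illustrative assumptions for exposition, not the paper's actual interfaces or aggregation rule.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detectorguard_predict(image, conventional_detector, robust_classifier,
                          window_size=64, stride=16):
    """Sketch of the pipeline: output the conventional detector's precise
    boxes when no attack is detected, otherwise trigger an alert."""
    # Step 1: precise but non-robust boxes from a conventional detector.
    boxes = conventional_detector(image)

    # Step 2: robust objectness map, built by sliding a provably robust
    # image classifier over the image and collecting object-positive windows.
    h, w = image.shape[:2]
    object_windows = []
    for y in range(0, h - window_size + 1, stride):
        for x in range(0, w - window_size + 1, stride):
            window = image[y:y + window_size, x:x + window_size]
            if robust_classifier(window):  # True if the window looks like an object
                object_windows.append((x, y, x + window_size, y + window_size))

    # Step 3: every robustly detected object region must overlap some
    # predicted box; an uncovered region signals a possible hiding attack.
    def covered(win):
        return any(iou(win, box) > 0 for box in boxes)

    if all(covered(win) for win in object_windows):
        return {"boxes": boxes}                       # clean-performance path
    return {"alert": "possible patch hiding attack"}  # security path

if __name__ == "__main__":
    # Toy demo with dummy components (purely illustrative).
    img = np.zeros((128, 128, 3))
    dummy_detector = lambda im: [(0, 0, 128, 128)]  # one box over everything
    dummy_classifier = lambda win: False            # never flags an object
    print(detectorguard_predict(img, dummy_detector, dummy_classifier))
```

In the clean (no-attack) case the comparison passes and the conventional detector's boxes are returned unchanged, which is why detection misses by the robust component alone cannot degrade DetectorGuard's clean performance.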

Related research:

- On Physical Adversarial Patches for Object Detection (06/20/2019): In this paper, we demonstrate a physical adversarial patch attack agains...
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection (12/08/2021): Object detection plays a key role in many security-critical systems. Adv...
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches (04/26/2021): An adversarial patch can arbitrarily manipulate image pixels within a re...
- You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors (09/30/2021): Blind spots or outright deceit can bedevil and deceive machine learning ...
- Adversarially-Aware Robust Object Detector (07/13/2022): Object detection, as a fundamental computer vision task, has achieved a ...
- Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World (01/21/2022): Deep learning models have been shown to be vulnerable to recent backdoor...
- MACAB: Model-Agnostic Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World (09/06/2022): Object detection is the foundation of various critical computer-vision t...
