Attacking Object Detectors via Imperceptible Patches on Background

09/16/2018
by Yuezun Li, et al.

Deep neural networks have been proven vulnerable to adversarial perturbations. Recent works have succeeded in generating adversarial perturbations either on the entire image or on the targets of interest to corrupt object detectors. In this paper, we investigate the vulnerability of object detectors from a new perspective: adding minimal perturbations to small background patches outside of the targets in order to corrupt the detection results. Our work focuses on attacking a component shared by state-of-the-art detectors (e.g., Faster R-CNN), the Region Proposal Network (RPN). Since the receptive fields used by the RPN are often larger than the proposals themselves, we propose a novel method to generate background perturbation patches, and show that perturbations lying solely outside of the targets can severely damage the performance of multiple types of detectors by simultaneously decreasing true positives and increasing false positives. We demonstrate the efficacy of our method on 5 different state-of-the-art object detectors on the MS COCO 2014 dataset.
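
To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how an imperceptible background-patch attack against an RPN-style objectness head could be set up with projected gradient steps. The callable rpn_objectness_scores, the binary patch_mask marking background patches outside all target boxes, and the eps/alpha/steps hyperparameters are all illustrative assumptions; the paper's actual objective also increases false positives, which this sketch omits.

```python
import torch

def attack_background_patches(image, patch_mask, rpn_objectness_scores,
                              eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style sketch: perturb only background patches (patch_mask == 1,
    assumed to lie entirely outside the target boxes) so that RPN
    objectness on true objects is suppressed."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Apply the perturbation only where the background mask is on.
        perturbed = (image + delta * patch_mask).clamp(0, 1)
        obj_logits = rpn_objectness_scores(perturbed)  # hypothetical wrapper
        # Drive objectness down; the full attack would also push some
        # background anchors up to create false positives.
        loss = obj_logits.sigmoid().mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on objectness
            delta.clamp_(-eps, eps)             # keep the patch imperceptible
            delta.grad.zero_()
    return (image + delta.detach() * patch_mask).clamp(0, 1)
```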
