Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects

11/28/2018
by Michael A. Alcorn, et al.

Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models. That is, we estimate the parameters of a 3D renderer that cause a target DNN to misbehave in response to the rendered image. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to OoD poses of well-known objects in ImageNet. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97% of their pose space. In addition, DNNs are highly sensitive to slight pose perturbations. Importantly, adversarial poses transfer across models and datasets. We find that 99.9% and 99.4% of the images misclassified by Inception-v3 also transfer to the AlexNet and ResNet-50 image classifiers trained on the same ImageNet dataset, respectively, and 75.5% transfer to the YOLOv3 object detector trained on MS COCO.
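To make the framework concrete, below is a minimal sketch of the pose-search idea the abstract describes: sample pose parameters for a 3D renderer, render the object, and check whether a target classifier still recognizes it. The `render_object` function is a hypothetical placeholder standing in for the paper's 3D renderer; the classifier and preprocessing use standard torchvision components, and random search is used here purely for illustration rather than the authors' exact estimation procedure.

```python
import torch
from torchvision import models, transforms

# Pretrained ImageNet classifier used as the target DNN.
model = models.inception_v3(weights="IMAGENET1K_V1").eval()

# Standard ImageNet preprocessing for Inception-v3 (299x299 input).
preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def render_object(mesh_path, yaw, pitch, roll):
    """Hypothetical renderer stub: return a PIL image of the object at the
    given pose. A real implementation would call a 3D renderer here."""
    raise NotImplementedError


def find_adversarial_poses(mesh_path, true_label, n_samples=1000):
    """Randomly search pose space for poses the classifier gets wrong."""
    failures = []
    for _ in range(n_samples):
        # Sample a rotation (in degrees) uniformly over pose space.
        yaw, pitch, roll = (360.0 * torch.rand(3)).tolist()
        image = render_object(mesh_path, yaw, pitch, roll)
        x = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            pred = model(x).argmax(dim=1).item()
        if pred != true_label:
            # The object is misclassified in this pose: record it.
            failures.append((yaw, pitch, roll, pred))
    return failures
```

Any pose for which the prediction differs from the object's true ImageNet label is an "adversarial pose" in the sense used above; the paper goes further by estimating renderer parameters directly rather than sampling them at random.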
