Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images

02/08/2019
by Sanjana Srivastava et al.

The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. Ullman et al. (2016) show that a slight change in the location or size of the visible region of a minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such accuracy drops caused by changes to the visible region are common to humans and existing state-of-the-art deep neural networks (DNNs), and are far more pronounced in DNNs. We find many cases in which a DNN classifies one region correctly and the other incorrectly, even though the two regions differ by only one row or column of pixels and are often larger than the average human minimal image. We show that this phenomenon is independent of the previously reported lack of invariance in DNNs to minor changes in object location. Our results thus reveal a new failure mode of DNNs that also affects humans, though to a much lesser degree. They expose how fragile DNN recognition is on natural images, even without any adversarial patterns being introduced. Bringing the robustness of DNNs on natural images up to the human level remains an open challenge for the community.
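The core manipulation described above, shifting the visible region by a single row or column of pixels and checking whether the classifier's prediction changes, is straightforward to probe. The sketch below uses a pretrained torchvision ResNet-50 on an arbitrary image file and crop; the file name, crop coordinates, and model choice are illustrative assumptions, not the authors' exact experimental setup.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained ImageNet classifier (illustrative choice; any torchvision model works).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),                      # shorter side -> 224 px
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1(region):
    """Top-1 ImageNet class index for a cropped image region."""
    with torch.no_grad():
        logits = model(preprocess(region).unsqueeze(0))
    return int(logits.argmax(dim=1))

img = Image.open("example.jpg").convert("RGB")   # hypothetical input image
x, y, size = 50, 50, 100                         # arbitrary visible region
crop = img.crop((x, y, x + size, y + size))
shifted = img.crop((x + 1, y, x + 1 + size, y + size))  # shift by one column

c, s = top1(crop), top1(shifted)
print("original crop prediction:", c)
print("shifted crop prediction: ", s)
print("prediction flipped" if c != s else "prediction unchanged")
```

Sweeping this probe over all crop positions and sizes of an image would give a rough map of where such one-pixel prediction flips occur, which is the kind of fragility the paper quantifies.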


research
11/18/2018

CIFAR10 to Compare Visual Recognition Performance between Deep Neural Networks and Humans

Visual object recognition plays an essential role in human daily life. T...
research
03/14/2022

Do DNNs trained on Natural Images acquire Gestalt Properties?

Under some circumstances, humans tend to perceive individual elements as...
research
01/05/2021

Understanding the Ability of Deep Neural Networks to Count Connected Components in Images

Humans can count very fast by subitizing, but slow substantially as the ...
research
12/07/2020

Sparse Fooling Images: Fooling Machine Perception through Unrecognizable Images

In recent years, deep neural networks (DNNs) have achieved equivalent or...
research
10/12/2020

On the Minimal Recognizable Image Patch

In contrast to human vision, common recognition algorithms often fail on...
research
08/27/2018

Generalisation in humans and deep neural networks

We compare the robustness of humans and current convolutional deep neura...
research
12/04/2022

Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks

Adversarial attacks can easily fool object recognition systems based on ...
