A systematic framework for natural perturbations from videos

06/05/2019
by   Vaishaal Shankar, et al.

We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos. As part of this framework, we construct Imagenet-Video-Robust, a human-expert--reviewed dataset of 22,178 images grouped into 1,109 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset. We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16 points. Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification and localization errors, leading to a median drop in detection mAP of 14 points. Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions.
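The abstract describes measuring an accuracy drop over sets of perceptually similar frames. Below is a minimal sketch of how such a metric could be computed, assuming a classifier is judged robust on a set only if it is correct on the anchor frame and on every similar neighbor frame; the function names and data layout are illustrative and not the paper's exact API.

```python
# Illustrative sketch: accuracy drop over sets of perceptually similar frames.
from typing import Callable, Dict, List

def accuracy_drop(
    predict: Callable[[str], int],      # maps an image path to a predicted class id
    frame_sets: Dict[str, List[str]],   # anchor frame -> perceptually similar neighbor frames
    labels: Dict[str, int],             # anchor frame -> ground-truth class id
) -> float:
    """Return the drop (in accuracy points) between anchor-frame accuracy and
    worst-case accuracy over each set of perceptually similar frames."""
    orig_correct = 0
    robust_correct = 0
    for anchor, neighbors in frame_sets.items():
        label = labels[anchor]
        if predict(anchor) == label:
            orig_correct += 1
        # Worst-case ("perturbed") accuracy: the prediction must be correct on
        # the anchor frame and on every similar neighbor frame in the set.
        if all(predict(f) == label for f in [anchor] + neighbors):
            robust_correct += 1
    n = len(frame_sets)
    return 100.0 * (orig_correct - robust_correct) / n
```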


