Rethinking Out-of-Distribution Detection From a Human-Centric Perspective

by Yao Zhu, et al.

Out-of-distribution (OOD) detection has received broad attention over the years, aiming to ensure the reliability and safety of deep neural networks (DNNs) in real-world scenarios by rejecting incorrect predictions. However, we notice a discrepancy between the conventional evaluation and the essential purpose of OOD detection. On the one hand, the conventional evaluation exclusively considers risks caused by label-space distribution shifts while ignoring the risks from input-space distribution shifts. On the other hand, the conventional evaluation rewards detection methods for not rejecting misclassified images in the validation dataset, even though misclassified images also cause risks and should be rejected. We appeal for rethinking OOD detection from a human-centric perspective: a proper detection method should reject the cases where the deep model's prediction mismatches human expectations and accept the cases where it meets them. We propose a human-centric evaluation and conduct extensive experiments on 45 classifiers and 8 test datasets. We find that a simple baseline OOD detection method can achieve comparable and even better performance than recently proposed methods, which suggests that progress in OOD detection over the past years may be overestimated. Additionally, our experiments demonstrate that model selection is non-trivial for OOD detection and should be considered an integral part of the proposed method, which contradicts the claim in existing works that proposed methods are universal across different models.
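The human-centric criterion sketched above can be made concrete: a detector should accept a sample exactly when the model's prediction matches the human-provided label, regardless of which distribution the sample came from. The snippet below is a minimal illustrative sketch of such an evaluation (not the paper's exact protocol or metric names); `human_centric_eval` and the thresholded max-softmax score are hypothetical stand-ins for the simple baseline the abstract mentions.

```python
import numpy as np

def human_centric_eval(ood_scores, preds, labels, threshold):
    """Illustrative human-centric evaluation (hypothetical helper).

    The detector accepts a sample when its confidence score is at or
    above `threshold`. From a human-centric view, a prediction should
    be accepted iff it matches the label, so we measure:
      - true-accept rate: kept predictions among the correct ones
      - true-reject rate: dropped predictions among the incorrect ones
    """
    accept = np.asarray(ood_scores) >= threshold
    correct = np.asarray(preds) == np.asarray(labels)
    # Accepting a correct prediction meets human expectations;
    # rejecting an incorrect one avoids risk.
    true_accept_rate = (accept & correct).sum() / max(correct.sum(), 1)
    true_reject_rate = (~accept & ~correct).sum() / max((~correct).sum(), 1)
    return true_accept_rate, true_reject_rate

# Example with max-softmax probability as a simple confidence score:
tar, trr = human_centric_eval(
    ood_scores=[0.9, 0.8, 0.3, 0.2],  # e.g. max softmax per sample
    preds=[1, 0, 1, 0],
    labels=[1, 1, 0, 0],
    threshold=0.5,
)
```

Note that this scoring makes no reference to whether a sample is "in-distribution": a misclassified in-distribution image counts against the detector if accepted, which is precisely the shift in perspective the abstract argues for.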




