Enhancing the Robustness of Prior Network in Out-of-Distribution Detection
With the recent surge of interest in deep neural networks, more real-world applications have begun to adopt them in practice. However, deep neural networks are known to have limited control over their predictions on unseen images. This weakness can threaten safety and cause harmful consequences in real-world scenarios. To address this issue, a popular task called out-of-distribution detection was proposed, which aims to separate out-of-distribution images from in-distribution images. In this paper, we propose a perturbed prior network architecture, which can efficiently separate model-level uncertainty from data-level uncertainty via prior entropy. To further enhance the robustness of the proposed entropy-based uncertainty measure, we propose a concentration perturbation algorithm, which adaptively adds noise to the concentration parameters so that in- and out-of-distribution images become better separable. Our method can directly rely on a pre-trained deep neural network without re-training it, and requires no knowledge of the network architecture or of out-of-distribution examples. This simplicity makes our method well suited for real-world AI applications. Through comprehensive experiments, our method demonstrates its superiority by achieving state-of-the-art results on many datasets.
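To make the idea concrete, below is a minimal sketch of an entropy-based OOD score with concentration perturbation, assuming (as in prior networks) that the model outputs Dirichlet concentration parameters for each input. The noise scale `epsilon`, the multiplicative noise form, and the helper names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import dirichlet

def dirichlet_entropy(alpha):
    """Differential entropy of Dirichlet(alpha); higher entropy indicates
    a flatter prior, i.e. more uncertainty about the class distribution."""
    return dirichlet.entropy(alpha)

def perturbed_ood_score(alpha, epsilon=0.1, n_samples=10, seed=0):
    """Average entropy under small random perturbations of the concentration
    parameters, in the spirit of the concentration perturbation idea above
    (the perturbation scheme here is an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_samples):
        noise = 1.0 + epsilon * rng.standard_normal(alpha.shape)
        # Clip to keep concentrations strictly positive after perturbation.
        alpha_noisy = np.clip(alpha * noise, a_min=1e-3, a_max=None)
        scores.append(dirichlet_entropy(alpha_noisy))
    return float(np.mean(scores))

# Example: a sharp (confident) Dirichlet vs. a flat (uncertain) one.
alpha_in = np.array([20.0, 1.0, 1.0])   # peaked prior -> low entropy
alpha_out = np.array([1.0, 1.0, 1.0])   # flat prior -> high entropy
print(perturbed_ood_score(alpha_in))    # lower score: in-distribution
print(perturbed_ood_score(alpha_out))   # higher score: flag as OOD
```

An input is flagged as out-of-distribution when its score exceeds a threshold chosen on held-out in-distribution data; because only the output concentrations are needed, the scheme works on top of a pre-trained network without re-training.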