Depth Uncertainty in Neural Networks

06/15/2020
by Javier Antorán, et al.

Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
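The single-pass property follows from the feed-forward structure: the activations at every intermediate depth are computed anyway on the way to the deepest layer, so attaching an output head at each depth exposes the predictions of all weight-sharing subnetworks at once, and a categorical distribution over depths marginalises them. Below is a minimal PyTorch sketch of this predictive marginalisation; it is an illustration under these assumptions, not the authors' code, and the names (DepthUncertaintyNet, depth_logits, max_depth) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthUncertaintyNet(nn.Module):
    """Hypothetical sketch: marginalising over depth in one forward pass.

    Every hidden layer feeds its own output head, so the predictions of
    all candidate depths fall out of a single pass through the stack.
    A learned categorical distribution q(d) over depths weights them.
    """

    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int, max_depth: int):
        super().__init__()
        self.input_layer = nn.Linear(in_dim, hidden_dim)
        self.blocks = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(max_depth)]
        )
        # One classification head per depth (heads could also be shared).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n_classes) for _ in range(max_depth)]
        )
        # Unnormalised log-probabilities of the distribution over depths.
        self.depth_logits = nn.Parameter(torch.zeros(max_depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.input_layer(x))
        per_depth = []
        for block, head in zip(self.blocks, self.heads):
            h = F.relu(block(h))
            per_depth.append(head(h))  # prediction of the depth-d subnetwork
        logits = torch.stack(per_depth)                  # (depth, batch, classes)
        log_q = F.log_softmax(self.depth_logits, dim=0)  # log q(d)
        # Marginal predictive, in log space for numerical stability:
        #   log p(y|x) = log sum_d q(d) p(y|x, d)
        return torch.logsumexp(
            F.log_softmax(logits, dim=-1) + log_q[:, None, None], dim=0
        )

# Example: ten-class prediction, all five depths evaluated in one pass.
net = DepthUncertaintyNet(in_dim=784, hidden_dim=128, n_classes=10, max_depth=5)
log_probs = net(torch.randn(32, 784))  # shape (32, 10), log p(y | x)
```

The per-depth predictions stacked here are also what a training objective over depths would consume, so the objective can be evaluated from the same pass, which is the property the abstract highlights.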

Related research

07/01/2021 · On the Practicality of Deterministic Epistemic Uncertainty
A set of novel approaches for estimating epistemic uncertainty in deep n...

10/13/2020 · Training independent subnetworks for robust prediction
Recent approaches to efficiently ensemble neural networks have shown tha...

03/16/2022 · Layer Ensembles: A Single-Pass Uncertainty Estimation in Deep Learning for Segmentation
Uncertainty estimation in deep learning has become a leading research fi...

02/11/2021 · The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms
Neural networks (NNs) lack measures of "reliability" estimation that wou...

05/16/2019 · Joint Learning of Neural Networks via Iterative Reweighted Least Squares
In this paper, we introduce the problem of jointly learning feed-forward...

09/17/2022 · Introspective Learning: A Two-Stage Approach for Inference in Neural Networks
In this paper, we advocate for two stages in a neural network's decision...
