On the Delta Method for Uncertainty Approximation in Deep Learning

12/02/2019
by Geir K. Nilsen, et al.

The Delta method is a well-known procedure for quantifying uncertainty in statistical models. The method has previously been applied in the context of neural networks, but it has not gained much popularity in deep learning because of the sheer size of the Hessian matrix. In this paper, we propose a low-cost variant of the method based on an approximate eigendecomposition of the positive curvature subspace of the Hessian matrix. The method has a computational complexity of O(KPN) time and O(KP) space, where K is the number of utilized Hessian eigenpairs, P is the number of model parameters, and N is the number of training examples. Given that the model is L_2-regularized with rate λ/2, we provide a bound on the uncertainty approximation error as a function of K. We show that when the smallest Hessian eigenvalue in the positive K/2-tail of the full spectrum and the largest Hessian eigenvalue in the negative K/2-tail are both approximately equal to λ, the error will be close to zero even when K ≪ P. We demonstrate the method with a TensorFlow implementation and show that meaningful rankings of images by prediction uncertainty can be obtained for a convolutional neural network based MNIST classifier. We also observe that false positives have higher prediction uncertainty than true positives, which suggests that the uncertainty measure carries supplementary information not captured by the predicted probability alone.
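To make the approach concrete, here is a minimal sketch of the low-rank Delta-method variance described above. It assumes access to a Hessian-vector-product routine `hvp` (e.g., built with two rounds of automatic differentiation in TensorFlow) and a gradient `grad_x` of the model output for a single input with respect to the P parameters; all names here are illustrative, not the paper's actual API. The sketch approximates H^{-1} by K extreme eigenpairs plus a λ^{-1}-scaled identity for the remaining directions, which is where the error bound above comes from.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def delta_variance(hvp, grad_x, P, K, lam):
    """Approximate the Delta-method variance g^T H^{-1} g using only
    K extreme eigenpairs of the P x P Hessian H.

    hvp:    callable v -> H @ v (Hessian-vector product; one call costs
            O(PN) when accumulated over N training examples)
    grad_x: gradient of the prediction w.r.t. the P parameters
    lam:    L2-regularization rate; eigenvalues outside the two
            K/2-tails are approximated by lam
    """
    H = LinearOperator((P, P), matvec=hvp, dtype=np.float64)

    # K/2 eigenpairs from each end of the spectrum (Lanczos iteration,
    # O(K) Hessian-vector products in total -> O(KPN) time, O(KP) space).
    vals_hi, vecs_hi = eigsh(H, k=K // 2, which="LA")
    vals_lo, vecs_lo = eigsh(H, k=K // 2, which="SA")
    vals = np.concatenate([vals_lo, vals_hi])
    vecs = np.hstack([vecs_lo, vecs_hi])

    # H^{-1} ~= V diag(1/vals - 1/lam) V^T + (1/lam) I: the K computed
    # eigenvalues are inverted exactly, all others are replaced by lam.
    coords = vecs.T @ grad_x
    return float(coords @ ((1.0 / vals - 1.0 / lam) * coords)
                 + grad_x @ grad_x / lam)
```

When the boundary eigenvalues of both tails are close to λ, the low-rank correction term fades into the λ^{-1} I term, which is the intuition behind the near-zero error for K ≪ P. Note that this sketch naively inverts whatever K eigenvalues the solver returns; the paper works with the positive curvature subspace, so directions of negative curvature would need the treatment given in the full text.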
