Quantifying Uncertainty from Different Sources in Deep Neural Networks for Image Classification

11/17/2020
by Aria Khoshsirat et al.

Quantifying uncertainty in a model's predictions is important because it enables, for example, the safety of an AI system to be increased by acting on the model's output in an informed manner. We cannot expect a system to be 100% accurate or perfect at its task; however, we can equip the system with tools that tell us when it is not certain about a prediction. This way, a second check can be performed, or the task can be passed to a human specialist. This is crucial for applications where the cost of an error is high, such as autonomous vehicle control, medical image analysis, financial estimation, or the legal field. Deep Neural Networks (DNNs) are powerful black-box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in DNNs remains a challenging, ongoing problem. Although there have been many efforts to equip neural networks with tools to estimate uncertainty, such as Monte Carlo Dropout, most previous methods focus on only one of the three types of uncertainty: model, data, or distributional. In this paper we propose a complete framework to capture and quantify all three types of uncertainty in DNNs for image classification. The framework combines an ensemble of CNNs for model uncertainty, a supervised reconstruction auto-encoder to capture distributional uncertainty, and the output of the activation functions in the last layer of the network to capture data uncertainty. Finally, we demonstrate the efficiency of our method on popular image-classification datasets.
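To make the three uncertainty sources in the abstract concrete, here is a minimal NumPy sketch of one common way each signal can be computed. The function name `uncertainty_signals`, the variance-based model-uncertainty measure, the entropy-based data-uncertainty measure, and the mean-squared reconstruction error are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uncertainty_signals(ensemble_logits, x, reconstruction):
    """Toy decomposition of the three uncertainty sources.

    ensemble_logits: (n_members, n_classes) last-layer logits from an
                     ensemble of CNNs for the same input image
    x:               flattened input image
    reconstruction:  the auto-encoder's reconstruction of x
    """
    probs = softmax(ensemble_logits)          # (n_members, n_classes)
    mean_probs = probs.mean(axis=0)

    # Data (aleatoric) uncertainty: entropy of the averaged
    # predictive distribution over classes.
    data_u = float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())

    # Model (epistemic) uncertainty: disagreement between ensemble
    # members, here the mean per-class variance of their probabilities.
    model_u = float(probs.var(axis=0).mean())

    # Distributional uncertainty: reconstruction error of the
    # auto-encoder; a large error suggests the input lies far from
    # the training distribution.
    dist_u = float(np.mean((x - reconstruction) ** 2))

    return {"data": data_u, "model": model_u, "distributional": dist_u}
```

In practice each signal would be thresholded or calibrated on held-out data before being used to trigger a second check or a hand-off to a human specialist.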

