Visualizing the diversity of representations learned by Bayesian neural networks

by Dennis Grinwald et al.

Explainable artificial intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks. In this work, we investigate how XAI methods can be used to explore and visualize the diversity of feature representations learned by Bayesian neural networks (BNNs). Our goal is to provide a global understanding of BNNs by making their decision-making strategies a) visible and tangible through feature visualizations and b) quantitatively measurable with a distance measure learned by contrastive learning. Our work provides new insights into the posterior distribution in terms of human-understandable feature information about the underlying decision-making strategies. Our main findings are the following: 1) global XAI methods can be applied to explain the diversity of decision-making strategies of BNN instances, 2) Monte Carlo dropout exhibits increased diversity in feature representations compared to the multimodal posterior approximation of MultiSWAG, 3) the diversity of learned feature representations correlates highly with the uncertainty estimates, and 4) the inter-mode diversity of the multimodal posterior decreases as the network width increases, while the intra-mode diversity increases. Our findings are consistent with recent deep neural network theory, providing additional intuition about what the theory implies in terms of human-understandable concepts.
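To make finding 2) concrete, the following is a minimal sketch (not the authors' code) of how Monte Carlo dropout yields the BNN "instances" whose representational diversity is compared: keeping dropout active at inference, each forward pass samples a different dropout mask, i.e. a different network drawn from the approximate posterior. The tiny NumPy MLP, its weights, and the variance-based diversity proxy are illustrative assumptions, not the paper's learned contrastive distance.

```python
# Monte Carlo dropout with a tiny NumPy MLP (illustrative sketch):
# each forward pass draws a fresh dropout mask, so repeated passes
# sample different "instances" of the network from the approximate posterior.
import numpy as np

rng = np.random.default_rng(0)

# Fixed weights of a small 2-layer network (10 -> 64 -> 3); hypothetical sizes.
W1 = rng.normal(size=(10, 64)) * 0.1
W2 = rng.normal(size=(64, 3)) * 0.1
p_drop = 0.5

def forward(x, rng):
    h = np.maximum(x @ W1, 0.0)                   # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop           # fresh dropout mask per pass
    h = h * mask / (1.0 - p_drop)                 # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # softmax probabilities

x = rng.normal(size=(8, 10))                      # a batch of 8 inputs
samples = np.stack([forward(x, rng) for _ in range(100)])  # shape (100, 8, 3)

mean_pred = samples.mean(axis=0)        # predictive mean over the 100 draws
diversity = samples.var(axis=0).mean()  # variance across draws: a crude proxy
print(samples.shape, diversity > 0)     # the paper instead learns a contrastive distance
```

The variance across draws stands in here for the paper's learned distance measure; the qualitative point (higher spread across sampled instances means higher predictive uncertainty) matches finding 3).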




