Functional Network: A Novel Framework for Interpretability of Deep Neural Networks

05/24/2022
by Ben Zhang, et al.

The layered structure of deep neural networks hinders the use of numerous analysis tools and thus the development of their interpretability. Inspired by the success of functional brain networks, we propose a novel framework for the interpretability of deep neural networks, namely, the functional network. We construct the functional network of fully connected networks and explore its small-worldness. In our experiments, the mechanisms of regularization methods, namely, batch normalization and dropout, are revealed using graph theoretical analysis and topological data analysis. Our empirical analysis shows the following: (1) Batch normalization enhances model performance by increasing the global efficiency and the number of loops, but reduces adversarial robustness by lowering the fault tolerance. (2) Dropout improves the generalization and robustness of models by improving functional specialization and fault tolerance. (3) Models with different regularizations can be clustered correctly according to their functional topological differences, reflecting the great potential of the functional network and topological data analysis for interpretability.
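The abstract does not detail how the functional network is built, so the following is a minimal sketch under one common assumption borrowed from functional brain-network analysis: treat hidden units as nodes and connect pairs whose activation correlations exceed a threshold, then probe the resulting graph's small-world character. The MLP, the input batch `x`, the threshold `tau`, and all function names are illustrative assumptions, not the authors' exact pipeline.

```python
# A minimal sketch (not the authors' method): build a "functional network" by
# thresholding correlations between hidden-unit activations of a fully connected
# network, then summarize it with standard graph-theoretic measures.

import numpy as np
import networkx as nx
import torch
import torch.nn as nn


def collect_activations(model: nn.Sequential, x: torch.Tensor) -> np.ndarray:
    """Run the batch through the MLP and concatenate hidden activations
    into an (n_samples, n_units) matrix."""
    acts, h = [], x
    with torch.no_grad():
        for layer in model:
            h = layer(h)
            if isinstance(layer, nn.ReLU):          # record post-activation units
                acts.append(h.reshape(h.shape[0], -1))
    return torch.cat(acts, dim=1).numpy()


def functional_network(acts: np.ndarray, tau: float = 0.5) -> nx.Graph:
    """Treat units as nodes; connect pairs whose |Pearson correlation| >= tau."""
    corr = np.nan_to_num(np.corrcoef(acts, rowvar=False))  # constant units -> NaN -> 0
    adj = np.abs(corr) >= tau
    np.fill_diagonal(adj, False)                            # no self-loops
    return nx.from_numpy_array(adj.astype(int))


def small_world_summary(G: nx.Graph) -> dict:
    """Global efficiency plus clustering and path length, the usual small-world probes."""
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    path_len = (nx.average_shortest_path_length(giant)
                if giant.number_of_nodes() > 1 else float("nan"))
    return {
        "global_efficiency": nx.global_efficiency(G),
        "avg_clustering": nx.average_clustering(G),
        "char_path_length": path_len,
    }


if __name__ == "__main__":
    torch.manual_seed(0)
    mlp = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 10))
    x = torch.randn(256, 20)
    G = functional_network(collect_activations(mlp, x), tau=0.5)
    print(small_world_summary(G))
```

In the spirit of the paper, such summaries could be computed for models trained with and without batch normalization or dropout and compared; the specific thresholding and metrics above are assumptions, not the reported experimental setup.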



Related research

02/13/2023 - How to Use Dropout Correctly on Residual Networks with Batch Normalization
For the stable optimization of deep neural networks, regularization meth...

10/30/2019 - Fault Tolerance of Neural Networks in Adversarial Settings
Artificial Intelligence systems require a thorough assessment of differen...

10/25/2022 - Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
Deep Neural Networks miss a principled model of their operation. A novel...

12/31/2020 - Topological obstructions in neural networks learning
We apply methods of topological data analysis to loss functions to gain ...

06/12/2022 - A Functional Information Perspective on Model Interpretation
Contemporary predictive models are hard to interpret as their deep nets ...

10/27/2019 - Inherent Weight Normalization in Stochastic Neural Networks
Multiplicative stochasticity such as Dropout improves the robustness and...

12/27/2019 - Emergence of Network Motifs in Deep Neural Networks
Network science can offer fundamental insights into the structural and f...
