A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers

05/20/2022
by Yuzhong Chen, et al.

Vision transformers (ViTs) and their variants have achieved remarkable success across a wide range of visual tasks. A key characteristic of these ViT models is that they adopt different strategies for aggregating spatial patch information within the artificial neural network (ANN). However, a unified representation of different ViT architectures is still lacking, which hinders systematic understanding and assessment of model representation performance. Moreover, how closely these well-performing ViT ANNs resemble real biological neural networks (BNNs) remains largely unexplored. To answer these fundamental questions, we propose, for the first time, a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key sub-graphs: an aggregation graph and an affine graph. The former treats ViT tokens as nodes and describes their spatial interactions, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we find that: a) a sweet spot of the aggregation graph leads to ViTs with significantly improved predictive performance; b) the graph measures of clustering coefficient and average path length are two effective indicators of model prediction performance, especially on datasets with small sample sizes; c) our findings are consistent across various ViT architectures and multiple datasets; and d) the proposed relational graph representation of ViT is highly similar to real BNNs derived from brain science data. Overall, our work provides a novel, unified, and biologically plausible paradigm for a more interpretable and effective representation of ViT ANNs.
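To make the graph-measure claims in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the two kinds of quantities involved: a toy "aggregation graph" over tokens and a toy "affine graph" over channels, followed by the clustering coefficient and average path length computed with NetworkX. The node counts, edge rules, and function names are placeholder assumptions chosen only to show how such measures are obtained from a relational graph.

```python
# Illustrative sketch only: toy stand-ins for the paper's aggregation graph
# (ViT tokens as nodes, edges from spatial interaction) and affine graph
# (channels as nodes, edges from channel communication), plus the two graph
# measures highlighted in the abstract. Edge rules here are assumptions.
import itertools
import networkx as nx

def token_aggregation_graph(num_tokens: int, window: int = 2) -> nx.Graph:
    """Toy aggregation graph: tokens are nodes; an edge links tokens whose
    (1-D) positions lie within `window` of each other."""
    g = nx.Graph()
    g.add_nodes_from(range(num_tokens))
    for i, j in itertools.combinations(range(num_tokens), 2):
        if abs(i - j) <= window:
            g.add_edge(i, j)
    return g

def channel_affine_graph(num_channels: int, group_size: int = 4) -> nx.Graph:
    """Toy affine graph: channels are nodes; channels within the same group
    (as in a grouped linear layer) are fully connected."""
    g = nx.Graph()
    g.add_nodes_from(range(num_channels))
    for start in range(0, num_channels, group_size):
        group = range(start, min(start + group_size, num_channels))
        g.add_edges_from(itertools.combinations(group, 2))
    return g

def graph_measures(g: nx.Graph) -> tuple[float, float]:
    """Average clustering coefficient and average shortest path length,
    computed on the largest connected component for robustness."""
    component = g.subgraph(max(nx.connected_components(g), key=len))
    return (nx.average_clustering(component),
            nx.average_shortest_path_length(component))

if __name__ == "__main__":
    for name, graph in [("aggregation", token_aggregation_graph(16)),
                        ("affine", channel_affine_graph(32))]:
        c, l = graph_measures(graph)
        print(f"{name} graph: clustering={c:.3f}, avg path length={l:.3f}")
```

In the paper's framing, such measures would be computed on the relational graphs derived from actual ViT architectures and correlated with predictive performance; the snippet above only demonstrates the mechanics of the measurements themselves.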

