Improving Attention Mechanism in Graph Neural Networks via Cardinality Preservation

by Shuo Zhang, et al.

Graph Neural Networks (GNNs) are powerful tools for learning representations of graph-structured data. Most GNNs use the message-passing scheme, in which a node's embedding is iteratively updated by aggregating information from its neighbors. To better express the influence of individual neighbors, attention mechanisms have become a popular way to assign trainable weights to a node's neighbors during aggregation. However, although attention-based GNNs have achieved state-of-the-art results on several tasks, a clear understanding of their discriminative capacity is missing. In this work, we present a theoretical analysis of the representational properties of GNNs that adopt an attention mechanism as the aggregator. The analysis identifies all cases in which such GNNs always fail to distinguish distinct structures. The finding shows that existing attention-based aggregators fail to preserve the cardinality of the multiset of node feature vectors during aggregation, which limits their discriminative ability. To improve the performance of attention-based GNNs, we propose two cardinality-preserving modifications that can be applied to any attention mechanism. We evaluate them within our GNN framework on benchmark datasets for graph classification. The results validate the improvements and show the competitive performance of our models.
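The cardinality problem described in the abstract can be illustrated with a minimal NumPy sketch. Because softmax attention weights sum to one, a weighted average over a multiset of neighbor features is unchanged when an element is duplicated, so two neighborhoods with the same distinct features but different sizes aggregate identically. The `cardinality_preserved_aggregate` function below scales the output by the neighbor count; this is a toy illustration of the general idea, not the paper's exact formulation (the scoring function here is a fixed vector rather than a learned attention module).

```python
import numpy as np

def softmax_attention_aggregate(neighbors: np.ndarray) -> np.ndarray:
    """Toy attention aggregator: softmax-weighted mean of neighbor features.

    `neighbors` has shape (num_neighbors, feature_dim). The scores come from a
    fixed vector of ones (a stand-in for a learned attention function).
    """
    w = np.ones(neighbors.shape[1])
    scores = neighbors @ w
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ neighbors

def cardinality_preserved_aggregate(neighbors: np.ndarray) -> np.ndarray:
    """One simple cardinality-aware fix: scale by the neighborhood size."""
    return len(neighbors) * softmax_attention_aggregate(neighbors)

# Two neighborhoods over the same distinct feature, with cardinality 1 vs. 2.
x = np.array([[1.0, 2.0]])
xx = np.array([[1.0, 2.0], [1.0, 2.0]])

# Softmax attention cannot tell them apart; the scaled variant can.
print(np.allclose(softmax_attention_aggregate(x),
                  softmax_attention_aggregate(xx)))      # True
print(np.allclose(cardinality_preserved_aggregate(x),
                  cardinality_preserved_aggregate(xx)))  # False
```

The duplicated neighborhood receives halved attention weights, so the weighted mean is identical to the singleton case; multiplying by the neighbor count reintroduces the size information that softmax normalization discards.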



Related papers:
- Graph Joint Attention Networks
- Demystifying Oversmoothing in Attention-Based Graph Neural Networks
- MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution
- Stochastic Aggregation in Graph Neural Networks
- Causal-Based Supervision of Attention in Graph Neural Network: A Better and Simpler Choice towards Powerful Attention
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification
- Fisher Information Embedding for Node and Graph Learning
