Understanding the Message Passing in Graph Neural Networks via Power Iteration

05/30/2020
by Xue Li, et al.

The mechanism of message passing in graph neural networks (GNNs) remains poorly understood in the literature. To our knowledge, no theoretical origin for GNNs other than convolutional neural networks has been proposed. Somewhat to our surprise, message passing can be best understood in terms of power iteration. By removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that are naturally interpretable and scalable. Experiments show that our models extend existing GNNs and enhance their capability of processing random-featured networks. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and define a lower limit for model evaluation by randomly initializing the aggregator of message passing. All the findings in this paper push the boundaries of our understanding of neural networks.
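
To make the reduction concrete, here is a minimal sketch (not the authors' implementation): once the weight matrices and activation functions are removed, a GNN layer collapses to H ← ÂH, i.e., one step of power iteration on a normalized adjacency matrix. The function names, the GCN-style symmetric normalization, and the per-column renormalization below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalized_adjacency(A):
    """GCN-style symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])              # add self-loops
    d = A.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def power_iteration_message_passing(A, X, num_layers=10):
    """Each 'layer' is one power-iteration step: no weights, no activation."""
    A_hat = normalized_adjacency(A)
    H = X
    for _ in range(num_layers):
        H = A_hat @ H                       # message passing = multiply by A_hat
        H = H / np.linalg.norm(H, axis=0, keepdims=True)  # keep iterates bounded
    return H

# Toy graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = np.random.rand(6, 2)                    # random node features
print(power_iteration_message_passing(A, X, num_layers=20))
```

Repeated application drives the columns of H toward the dominant eigenvectors of Â, which is why even random initial features can yield cluster-revealing embeddings.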
