Hop-Aware Dimension Optimization for Graph Neural Networks

05/30/2021
by   Ailing Zeng, et al.

In Graph Neural Networks (GNNs), the embedding of each node is obtained by aggregating information from its direct and indirect neighbors. As the messages passed among nodes contain both information and noise, the critical issue in GNN representation learning is how to retrieve information effectively while suppressing noise. Generally speaking, interactions with distant nodes introduce more noise for a particular node than those with close nodes. However, in most existing works, the messages passed among nodes are mingled together, which is inefficient from a communication perspective. Mixing information from clean sources (low-order neighbors) and noisy sources (high-order neighbors) makes discriminative feature extraction challenging. Motivated by the above, we propose a simple yet effective ladder-style GNN architecture, namely LADDER-GNN. Specifically, we separate messages from different hops and assign different dimensions to them before concatenating them to obtain the node representation. Such disentangled representations facilitate extracting information from messages passed from different hops, and their corresponding dimensions are determined with a reinforcement learning-based neural architecture search strategy. The resulting hop-aware representations generally contain more dimensions for low-order neighbors and fewer dimensions for high-order neighbors, leading to a ladder-style aggregation scheme. We verify the proposed LADDER-GNN on several semi-supervised node classification datasets. Experimental results show that the proposed simple hop-aware representation learning solution achieves state-of-the-art performance on most datasets.
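The core idea described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: messages from each hop are aggregated separately (via repeated application of a row-normalized adjacency with self-loops), projected to a hop-specific dimension, and concatenated. The per-hop projection matrices here are random, untrained stand-ins, and the specific hop dimensions `[4, 2, 1]` are hypothetical values chosen only to show the ladder shape (more dimensions for low-order hops, fewer for high-order ones); in the paper, these dimensions are found by a reinforcement learning-based architecture search.

```python
import numpy as np

def ladder_aggregate(adj, x, hop_dims, rng=None):
    """Sketch of ladder-style hop-aware aggregation.

    For each hop k, aggregate node features through the k-th application of a
    row-normalized adjacency matrix (with self-loops), project the result to
    hop_dims[k] dimensions, and concatenate the per-hop projections into one
    disentangled node representation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    # Add self-loops and row-normalize for mean-style message passing.
    a = adj + np.eye(n)
    a = a / a.sum(axis=1, keepdims=True)
    parts, h = [], x
    for dim in hop_dims:  # hop_dims[0] is for 1-hop messages, etc.
        h = a @ h         # aggregate information from one more hop away
        # Random untrained projection -- a stand-in for a learned linear map.
        w = rng.standard_normal((d, dim)) / np.sqrt(d)
        parts.append(h @ w)
    return np.concatenate(parts, axis=1)

# Tiny 3-node path graph: 0 - 1 - 2, with one-hot node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3)
# Ladder-style dimension assignment: low-order hops get more dimensions.
z = ladder_aggregate(adj, x, hop_dims=[4, 2, 1])
print(z.shape)  # (3, 7): 4 + 2 + 1 dimensions per node
```

Because the per-hop blocks are concatenated rather than summed, a downstream classifier can weight clean low-order information and noisy high-order information independently, which is the communication-efficiency argument the abstract makes.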
