Revisiting Heterophily For Graph Neural Networks

10/14/2022
by   Sitao Luan, et al.

Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by exploiting graph structure through a relational inductive bias (the homophily assumption). While GNNs are commonly believed to outperform NNs on real-world tasks, recent work has identified a non-trivial set of datasets on which they fail to do so. Heterophily has been considered the main cause of this empirical observation, and numerous methods have been proposed to address it. In this paper, we first revisit the widely used homophily metrics and point out that considering only graph-label consistency is a shortcoming of these metrics. We then study heterophily from the perspective of post-aggregation node similarity and define new homophily metrics that are potentially advantageous over existing ones. Based on this investigation, we prove that some harmful cases of heterophily can be effectively addressed by a local diversification operation. We then propose Adaptive Channel Mixing (ACM), a framework that adaptively exploits aggregation, diversification, and identity channels node-wise to extract richer localized information under diverse node heterophily conditions. ACM is more powerful than the commonly used uni-channel framework for node classification on heterophilic graphs and is easy to implement in baseline GNN layers. Evaluated on 10 benchmark node classification tasks, ACM-augmented baselines consistently achieve significant performance gains, exceeding state-of-the-art GNNs on most tasks without incurring a significant computational burden.
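The three-channel design described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the weight matrices (`W_low`, `W_high`, `W_id`, `W_mix`), the symmetric normalization, and the softmax-based node-wise mixing are illustrative assumptions chosen to show how an aggregation (low-pass), diversification (high-pass), and identity channel can be combined per node.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def acm_layer(A, X, W_low, W_high, W_id, W_mix):
    """One adaptive-channel-mixing-style layer (hypothetical sketch).

    Combines a low-pass (aggregation), high-pass (diversification), and
    identity channel with node-wise mixing weights.
    """
    A_norm = normalize_adj(A)
    I = np.eye(A.shape[0])
    H_low = A_norm @ X @ W_low           # aggregation channel (smooths neighbors)
    H_high = (I - A_norm) @ X @ W_high   # diversification channel (sharpens differences)
    H_id = X @ W_id                      # identity channel (keeps raw features)
    # One mixing score per channel per node, normalized across channels.
    scores = softmax(
        np.stack([H_low @ W_mix, H_high @ W_mix, H_id @ W_mix], axis=1),
        axis=1,
    )  # shape: (n_nodes, 3, 1)
    return scores[:, 0] * H_low + scores[:, 1] * H_high + scores[:, 2] * H_id
```

The key design point the sketch mirrors is that the mixing weights are computed per node, so nodes in homophilic neighborhoods can lean on the aggregation channel while nodes in heterophilic neighborhoods can favor diversification or identity.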

Related research

- Is Heterophily A Real Nightmare For Graph Neural Networks To Do Node Classification? (09/12/2021)
- On Local Aggregation in Heterophilic Graphs (06/06/2021)
- Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective (07/22/2022)
- When Do We Need GNN for Node Classification? (10/30/2022)
- Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Neural Networks (12/21/2022)
- Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks (08/20/2020)
- Unified Robust Training for Graph Neural Networks against Label Noise (03/05/2021)