Are All Edges Necessary? A Unified Framework for Graph Purification

by Zishan Gu, et al.

Graph Neural Networks (GNNs), deep learning models that operate on graph-structured data, have achieved state-of-the-art performance in many tasks. However, it has been shown repeatedly that not all edges in a graph are necessary for training machine learning models; some connections between nodes may carry redundant or even misleading information into downstream tasks. In this paper, we propose a method for dropping edges in order to purify graph data from a new perspective. Specifically, we present a framework for purifying graphs with the least loss of information, under which the core problems are how to better evaluate edges and how to delete the relatively redundant ones with minimal information loss. To address these two problems, we propose several measurements for edge evaluation and different judges and filters for edge deletion. We also introduce a residual-iteration strategy and a surrogate model for measurements that require unknown information. Experimental results show that our KL-divergence-based measurement, combined with constraints that maintain graph connectivity and an iterative deletion procedure, removes the most edges while preserving GNN performance. Further experiments show that this method also achieves the best defense performance against adversarial attacks.
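The abstract's core recipe (score each edge by a KL-style divergence, then greedily delete high-divergence edges while keeping the graph connected) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node feature distributions, the symmetrized-KL edge score, and the `purify` function's greedy loop are all assumptions introduced here for clarity.

```python
import math
from collections import deque

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (eps avoids log(0))."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def is_connected(nodes, edges):
    """BFS connectivity check on an undirected edge list."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == len(nodes)

def purify(nodes, edges, features, drop_ratio=0.3):
    """Greedily drop up to drop_ratio of the edges, highest divergence
    first, skipping any deletion that would disconnect the graph.
    `features` maps each node to a normalized feature distribution
    (a hypothetical stand-in for learned node representations)."""
    # Symmetrized KL as the edge score: high divergence across an edge
    # suggests the connection is noisy or heterophilous.
    def score(e):
        u, v = e
        return kl_divergence(features[u], features[v]) + \
               kl_divergence(features[v], features[u])
    kept = list(edges)
    budget = int(drop_ratio * len(kept))
    for e in sorted(kept, key=score, reverse=True):
        if budget == 0:
            break
        candidate = [x for x in kept if x != e]
        if is_connected(nodes, candidate):  # connectivity constraint
            kept = candidate
            budget -= 1
    return kept
```

For example, on a 4-node graph where node 3's features diverge sharply from node 0's, the edge (0, 3) is scored highest and removed first, while deletions that would isolate a node are skipped.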


