Incremental Training of Graph Neural Networks on Temporal Graphs under Distribution Shift

by Lukas Galke, et al.

Current graph neural networks (GNNs) are promising, especially when the entire graph is known for training. However, it is not yet clear how to efficiently train GNNs on temporal graphs, where new vertices, edges, and even classes appear over time. We face two challenges: First, shifts in the label distribution (including the appearance of new labels), which require adapting the model. Second, the growth of the graph, which makes it, at some point, infeasible to train over all vertices and edges. We address these issues by applying a sliding window technique, i.e., we incrementally train GNNs on limited window sizes and analyze their performance. For our experiments, we have compiled three new temporal graph datasets based on scientific publications and evaluate isotropic and anisotropic GNN architectures. Our results show that both GNN types provide good results even for a window size of just 1 time step. With window sizes of 3 to 4 time steps, GNNs achieve at least 95% of the accuracy obtained when training on the entire timeline of the graph; with window sizes of 6 or 8, at least 99% is retained. These insights have direct consequences for training GNNs over temporal graphs. We provide the code and the newly compiled datasets for reproducibility and reuse.
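The sliding-window scheme described above can be sketched as follows. This is a minimal illustration of the window-selection logic only, not the authors' implementation; the function name and the surrounding training loop are hypothetical, and the actual GNN training on each window's induced subgraph is elided.

```python
def sliding_windows(timesteps, window_size):
    """For each time step t, yield t together with the list of the last
    `window_size` time steps up to and including t. In the paper's setting,
    the GNN would be incrementally trained only on the subgraph induced by
    vertices and edges from these steps, rather than the full timeline."""
    for i, t in enumerate(timesteps):
        start = max(0, i - window_size + 1)
        yield t, timesteps[start:i + 1]

# Example: a timeline of five annual snapshots and a window of 3 time steps.
timeline = [2015, 2016, 2017, 2018, 2019]
for t, window in sliding_windows(timeline, window_size=3):
    # Hypothetical usage: (re)train on `window`, then evaluate on the new
    # vertices appearing at step t.
    print(t, window)
```

For the 2019 step, only the 2017–2019 snapshots are kept for training, which bounds memory and compute as the graph grows while, per the paper's results, retaining most of the full-timeline accuracy.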



