Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions

04/03/2021
by   Shyam A. Tailor, et al.

Training and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency. In this work we present a new type of GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics suited to accelerator implementation. Our proposal uses memory proportional to the number of vertices in the graph, in contrast to competing methods which require memory proportional to the number of edges; we find that our efficient approach actually achieves higher accuracy than competing approaches across 5 large and varied datasets against strong baselines. We achieve our results by using a novel adaptive filtering approach inspired by signal processing; it can be interpreted as enabling each vertex to have its own weight matrix, and is not related to attention. Following our focus on efficient hardware usage, we propose aggregator fusion, a technique to enable GNNs to significantly boost their representational power, with only a small increase in latency of 19% over standard sparse matrix multiplication. Code and pretrained models can be found at https://github.com/shyam196/egc.
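The per-vertex weight matrix idea admits a compact sketch. Below is a minimal PyTorch illustration, assuming the adaptive filter is realized by mixing a small bank of shared basis weight matrices with coefficients computed from each node's own features; the class name `AdaptiveFilterConv`, the basis count, and the softmax mixing are illustrative assumptions rather than the authors' exact design (see the linked repository for the reference implementation).

```python
import torch
import torch.nn as nn


class AdaptiveFilterConv(nn.Module):
    """Sketch of per-vertex adaptive filtering: each node's effective
    weight matrix is a node-dependent mixture of B shared basis matrices,
    so the extra memory is the (N x B) coefficient tensor -- O(vertices),
    never a per-edge score as in attention-based methods."""

    def __init__(self, in_dim: int, out_dim: int, num_bases: int = 4):
        super().__init__()
        self.bases = nn.Linear(in_dim, out_dim * num_bases, bias=False)
        self.mixer = nn.Linear(in_dim, num_bases)  # per-node coefficients
        self.num_bases = num_bases
        self.out_dim = out_dim

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: sparse (N, N) normalized adjacency; a single SpMM performs
        # the neighbourhood aggregation, independent of the basis count.
        agg = torch.sparse.mm(adj, x)                          # (N, in_dim)
        h = self.bases(agg).view(-1, self.num_bases, self.out_dim)
        w = torch.softmax(self.mixer(x), dim=-1)               # (N, B)
        # Mixing basis outputs per node is equivalent to giving node i
        # the weight matrix sum_b w[i, b] * Theta_b.
        return torch.einsum("nb,nbo->no", w, h)
```

Because only the mixing coefficients are node-specific, the layer keeps the single-SpMM cost of a plain graph convolution while varying expressivity per vertex.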

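Aggregator fusion can be sketched in a similar spirit. The underlying observation is that sparse aggregation is memory-bound, so evaluating several aggregators while each edge is streamed once adds little latency; the hypothetical `fused_aggregate` below only illustrates the intended semantics (sum, mean, and max over the same edge list) in plain PyTorch, since the latency benefit itself requires a fused kernel. `Tensor.index_reduce_` assumes a recent PyTorch release.

```python
import torch


def fused_aggregate(edge_index: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Semantics of aggregator fusion: sum, mean, and max neighbourhood
    aggregations over one edge list. A fused kernel would compute all
    three while streaming each edge once; since sparse aggregation is
    memory-bound, the extra arithmetic per loaded edge is nearly free."""
    src, dst = edge_index                 # COO edges, shape (2, E)
    msgs = x[src]                         # gather source features once

    out_sum = torch.zeros_like(x).index_add_(0, dst, msgs)
    deg = torch.zeros(x.size(0), device=x.device).index_add_(
        0, dst, torch.ones(src.numel(), device=x.device))
    out_mean = out_sum / deg.clamp(min=1).unsqueeze(-1)
    out_max = torch.full_like(x, float("-inf")).index_reduce_(
        0, dst, msgs, reduce="amax")      # requires PyTorch >= 1.12
    out_max = out_max.masked_fill(out_max.isinf(), 0.0)  # isolated nodes

    # Concatenating lets downstream layers draw on all aggregators.
    return torch.cat([out_sum, out_mean, out_max], dim=-1)
```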