COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional Networks

05/15/2022
by Sumit K. Mandal, et al.

Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data found inherently in many application areas. GCNs distribute the outputs of neural networks embedded in each vertex over multiple iterations to take advantage of the relations captured by the underlying graphs. Consequently, they incur a significant amount of computation and irregular communication overheads, which call for GCN-specific hardware accelerators. To this end, this paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration. Besides accelerating the computation using custom compute elements (CEs) and in-memory computing, COIN minimizes the intra- and inter-CE communication in GCN operations to optimize performance and energy efficiency. Experimental evaluations with widely used datasets show up to a 105x improvement in energy consumption compared to a state-of-the-art GCN accelerator.
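The abstract does not reproduce COIN's dataflow, but the per-layer GCN propagation that such accelerators target can be sketched as follows. This is a minimal NumPy illustration of the standard GCN layer H' = ReLU(A_hat H W); the function and variable names are illustrative assumptions, not taken from the COIN design.

```python
import numpy as np

def gcn_layer(a_hat, h, w):
    """One GCN propagation step: H' = ReLU(A_hat @ H @ W).

    a_hat : normalized adjacency matrix with self-loops (N x N)
    h     : vertex feature matrix for the current layer (N x F_in)
    w     : learned weight matrix (F_in x F_out)
    """
    # Dense feature transformation (H @ W) followed by neighbor
    # aggregation (A_hat @ ...); the irregular aggregation step is the
    # source of the communication overhead the paper aims to reduce.
    return np.maximum(a_hat @ (h @ w), 0.0)

# Toy example: 4-vertex graph, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
a_tilde = adj + np.eye(4)                      # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt      # symmetric normalization

h0 = np.random.rand(4, 3)
w0 = np.random.rand(3, 2)
print(gcn_layer(a_hat, h0, w0))
```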


Related research

SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for Graph Similarity Computation (11/10/2021)
While there have been many studies on hardware acceleration for deep lea...

GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for Memory-Efficient Graph Convolutional Neural Networks (03/01/2022)
Graph convolutional neural networks (GCNs) have emerged as a key technol...

RED: A ReRAM-based Deconvolution Accelerator (07/05/2019)
Deconvolution has been widespread in neural networks. For example, it is...

Rubik: A Hierarchical Architecture for Efficient Graph Learning (09/26/2020)
Graph convolutional network (GCN) emerges as a promising direction to le...

Multi-node Acceleration for Large-scale GCNs (07/15/2022)
Limited by the memory capacity and compute power, single-node graph convo...

SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators (01/25/2023)
Graph convolutional networks (GCNs) are becoming increasingly popular as...

Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators (01/24/2023)
Graph convolutional networks (GCNs) are becoming increasingly popular as...
