SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures

05/10/2022
by Yunjae Lee, et al.

Graph neural networks (GNNs) extract features by learning both the representation of each object (i.e., a graph node) and the relationships across different objects (i.e., the edges that connect nodes), achieving state-of-the-art performance in various graph-based tasks. Despite their strengths, utilizing these algorithms in a production environment faces several challenges, as the number of graph nodes and edges can reach billions to hundreds of billions, requiring substantial storage space for training. Unfortunately, state-of-the-art ML frameworks employ an in-memory processing model, which significantly hampers the productivity of ML practitioners because it mandates that the overall working set fit within DRAM capacity. In this work, we first conduct a detailed characterization of a state-of-the-art, large-scale GNN training algorithm, GraphSAGE. Based on the characterization, we then explore the feasibility of utilizing capacity-optimized NVM SSDs for storing memory-hungry GNN data, which enables large-scale GNN training beyond the limits of main memory size. Given the large performance gap between DRAM and SSDs, however, blindly utilizing SSDs as a direct substitute for DRAM leads to significant performance loss. We therefore develop SmartSAGE, a software/hardware co-design based on an in-storage processing (ISP) architecture. Our work demonstrates that an ISP-based large-scale GNN training system can achieve both high-capacity storage and high performance, opening up opportunities for ML practitioners to train large GNN datasets without being hampered by the physical limitations of main memory size.
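
To make the memory-capacity problem concrete, below is a minimal sketch (not the paper's implementation) of GraphSAGE-style mini-batch preparation when node features live on an SSD rather than in DRAM. The CSR arrays, the file name features.bin, and all sizes are illustrative assumptions; np.memmap stands in for the out-of-memory feature store whose access pattern the paper characterizes.

```python
import numpy as np

# Toy graph in CSR form: the neighbors of node v are
# indices[indptr[v]:indptr[v + 1]].
indptr = np.array([0, 2, 4, 5, 7])
indices = np.array([1, 2, 0, 3, 3, 1, 2])

NUM_NODES, FEAT_DIM = 4, 8
rng = np.random.default_rng(0)

# Node features kept in a file instead of DRAM; np.memmap pages data in
# from the SSD on demand. (Created with mode="w+" here only so the sketch
# is self-contained; a real pipeline would open a prebuilt file read-only.)
features = np.memmap("features.bin", dtype=np.float32,
                     mode="w+", shape=(NUM_NODES, FEAT_DIM))
features[:] = rng.random((NUM_NODES, FEAT_DIM), dtype=np.float32)

def sample_neighbors(node, fanout):
    """Uniformly sample up to `fanout` neighbors of `node`, GraphSAGE-style."""
    neigh = indices[indptr[node]:indptr[node + 1]]
    if len(neigh) <= fanout:
        return neigh
    return rng.choice(neigh, size=fanout, replace=False)

def build_minibatch(seed_nodes, fanout):
    """Gather the features of each seed node and its sampled neighbors.

    Every `features[...]` access below becomes a small, random read against
    storage; this fine-grained pattern is what makes a naive DRAM-to-SSD
    substitution slow and motivates pushing work into the device (ISP)."""
    batch = []
    for v in seed_nodes:
        neigh = sample_neighbors(v, fanout)
        batch.append((np.asarray(features[v]), np.asarray(features[neigh])))
    return batch

minibatch = build_minibatch(seed_nodes=[0, 3], fanout=2)
```

The sketch deliberately leaves the per-node reads fine-grained to expose the I/O bottleneck; coalescing them, or offloading the gather into the storage device as an ISP design does, is what recovers the lost performance.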

Related research

06/28/2023 - Accelerating Sampling and Aggregation Operations in GNN Frameworks with GPU Initiated Direct Storage Accesses
08/19/2022 - Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching
01/23/2022 - Hardware/Software Co-Programmable Framework for Computational SSDs to Accelerate Deep Learning Service on Large-Scale Graphs
11/10/2022 - A Comprehensive Survey on Distributed Training of Graph Neural Networks
08/16/2023 - Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
10/11/2020 - DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs
07/25/2022 - Benchmarking GNN-Based Recommender Systems on Intel Optane Persistent Memory
