SSFG: Stochastically Scaling Features and Gradients for Regularizing Graph Convolution Networks
Graph convolutional networks have been successfully applied to various graph-based tasks. In a typical graph convolutional layer, node features are computed by aggregating neighborhood information. Repeatedly applying graph convolutions can cause the oversmoothing issue, i.e., node features converging to similar values. This is one of the major causes of overfitting in graph learning, where the model fits the training data well but generalizes poorly to test data. In this paper, we present a stochastic regularization method to address this issue. In our method, we stochastically scale features and gradients (SSFG) by a factor sampled from a probability distribution during training. We show that applying stochastic scaling at the feature level is complementary to applying it at the gradient level in improving overall performance. When used together with ReLU, our method can be seen as a stochastic ReLU. We experimentally validate our SSFG regularization method on seven benchmark datasets covering different graph-based tasks. Extensive experimental results demonstrate that our method effectively improves the overall performance of the baseline graph networks.
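As a rough illustration of the idea described above, the following is a minimal PyTorch-style sketch of stochastic feature-and-gradient scaling. The uniform sampling range, the module name, and the placement after a graph-convolution layer are assumptions made for illustration only, not details taken from the paper.

```python
import torch
import torch.nn as nn


class _ScaleFeatAndGrad(torch.autograd.Function):
    # Custom autograd op: scale the features in the forward pass and the
    # gradients in the backward pass, each by an independently sampled factor.
    @staticmethod
    def forward(ctx, x, low, high):
        factor = torch.empty(1, device=x.device).uniform_(low, high)
        ctx.low, ctx.high = low, high
        return x * factor

    @staticmethod
    def backward(ctx, grad_output):
        # Sample a separate factor for the gradient instead of reusing the
        # forward factor; non-tensor inputs (low, high) receive no gradient.
        factor = torch.empty(1, device=grad_output.device).uniform_(ctx.low, ctx.high)
        return grad_output * factor, None, None


class SSFG(nn.Module):
    """Sketch of SSFG regularization; the uniform(low, high) distribution
    is an assumed choice, not the distribution specified in the paper."""

    def __init__(self, low=0.9, high=1.1):
        super().__init__()
        self.low, self.high = low, high

    def forward(self, x):
        if not self.training:  # act as the identity at evaluation time
            return x
        return _ScaleFeatAndGrad.apply(x, self.low, self.high)
```

In use, such a module would typically be applied to the output of a graph convolution together with the nonlinearity, e.g. `h = torch.relu(ssfg(conv(x, adj)))` (hypothetical layer names), which is where the "stochastic ReLU" view mentioned above comes from.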