Scalable Graph Embeddings via Sparse Transpose Proximities

05/16/2019
by   Zhewei Wei, et al.

Graph embedding learns low-dimensional representations for the nodes of a graph while preserving the graph structure. Significant progress has recently been made in this emerging research area, yet several fundamental problems remain open. First, existing methods fail to preserve out-degree distributions on directed graphs. Second, many existing methods employ random-walk-based proximities and thus suffer from conflicting optimization goals on undirected graphs. Finally, existing factorization methods are unable to achieve scalability and non-linearity simultaneously. This paper presents an in-depth study of graph embedding techniques on both directed and undirected graphs. We analyze the fundamental reasons behind the distortion of out-degree distributions and the conflicting optimization goals, and we propose transpose proximity, a unified approach that solves both problems. Based on the concept of transpose proximity, we design STRAP, a factorization-based graph embedding algorithm that achieves scalability and non-linearity simultaneously. STRAP uses the backward push algorithm to efficiently compute sparse Personalized PageRank (PPR) values as its transpose proximities. By imposing a sparsity constraint, we are able to apply non-linear operations to the proximity matrix and perform efficient matrix factorization to derive the embedding vectors. Finally, we present an extensive experimental study evaluating the effectiveness of various graph embedding algorithms, and we show that STRAP outperforms the state-of-the-art methods in terms of both effectiveness and scalability.
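
The abstract describes two computational building blocks: backward push to obtain sparse PPR proximities, and a non-linear transform followed by matrix factorization to produce the embeddings. The snippet below is a minimal illustrative sketch of those two steps, not the authors' implementation; the function names and parameters (backward_push, factorize, alpha, r_max, d) are assumptions chosen for exposition.

```python
# Sketch only: backward push for sparse PPR toward one target node,
# then an element-wise non-linearity plus truncated SVD on the
# resulting sparse proximity matrix. Parameter values are illustrative.

import numpy as np
from scipy.sparse.linalg import svds
from collections import defaultdict, deque

def backward_push(in_neighbors, out_degree, target, alpha=0.5, r_max=1e-4):
    """Approximate PPR scores pi(s, target) for all s; residuals pushed until below r_max."""
    estimate = defaultdict(float)   # accumulated PPR estimates
    residual = defaultdict(float)   # mass still to be distributed
    residual[target] = 1.0
    queue = deque([target])
    while queue:
        v = queue.popleft()
        r_v = residual[v]
        if r_v <= r_max:            # may have been processed already
            continue
        residual[v] = 0.0
        estimate[v] += alpha * r_v
        # push the remaining mass backward along incoming edges
        for u in in_neighbors[v]:
            residual[u] += (1.0 - alpha) * r_v / out_degree[u]
            if residual[u] > r_max:
                queue.append(u)
    return estimate                  # sparse: only nodes with non-negligible proximity

def factorize(proximity, d=128):
    """Apply a non-linearity to the non-zeros of a scipy sparse matrix, then truncated SVD."""
    proximity.data = np.log1p(proximity.data)        # non-linear op touches non-zeros only
    u, s, vt = svds(proximity, k=d)
    return u * np.sqrt(s), vt.T * np.sqrt(s)         # source / target embedding vectors
```

The sparsity of the backward-push output is what keeps the element-wise non-linearity and the truncated factorization affordable at scale, which is the combination of non-linearity and scalability the abstract emphasizes.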
