Disentangling Interpretable Generative Parameters of Random and Real-World Graphs

10/12/2019
by   Niklas Stoehr, et al.

While a wide range of interpretable generative procedures for graphs exist, matching an observed graph topology with such a procedure and a choice of its parameters remains an open problem. Devising generative models that closely reproduce real-world graphs requires domain knowledge and time-consuming simulation. Existing deep learning approaches demand less manual modelling but offer little interpretability. This work approaches graph generation (decoding) as the inverse of graph compression (encoding). We show that in a disentanglement-focused deep autoencoding framework, specifically Beta-Variational Autoencoders (Beta-VAE), choices of generative procedures and their parameters arise naturally in the latent space. Our model learns disentangled, interpretable latent variables that represent the generative parameters of procedurally generated random graphs and real-world graphs. The degree of disentanglement is measured quantitatively with the Mutual Information Gap (MIG). When trained on Erdős–Rényi (ER) random graphs, the Beta-VAE's latent variables have a near one-to-one mapping to the ER parameters n and p. We also deploy the model to analyse the correlation between graph topology and node attributes, measuring their mutual dependence without handpicking topological properties.
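The two quantitative ingredients of the abstract can be made concrete with a short sketch: sampling from the G(n, p) generative procedure the model is trained on, and the MIG score used to measure disentanglement. This is not code from the paper; `sample_er_graph` and `mutual_information_gap` are illustrative helpers, and MIG is computed here in its standard form (per-factor gap between the two most informative latent dimensions, normalized by the factor's entropy).

```python
import random


def sample_er_graph(n, p, seed=None):
    """Sample an Erdős–Rényi G(n, p) graph as an adjacency matrix.

    Each of the n*(n-1)/2 possible undirected edges is included
    independently with probability p — the two parameters n and p
    that the Beta-VAE's latent variables are reported to recover.
    """
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj


def mutual_information_gap(mi, entropies):
    """Mutual Information Gap over a K x D matrix mi, where
    mi[k][d] = I(v_k; z_d) for ground-truth factor v_k and latent z_d.

    For each factor, take the gap between the largest and second-largest
    mutual information across latents, normalize by the factor's entropy
    H(v_k), and average over factors. A score near 1 means each factor
    is captured by a single latent dimension.
    """
    gaps = []
    for k, row in enumerate(mi):
        top = sorted(row, reverse=True)
        gaps.append((top[0] - top[1]) / entropies[k])
    return sum(gaps) / len(gaps)


# Example: a small ER graph and a toy MIG evaluation in which
# latent 0 tracks factor n and latent 1 tracks factor p.
graph = sample_er_graph(n=8, p=0.3, seed=0)
num_edges = sum(row.count(1) for row in graph) // 2
mig = mutual_information_gap([[1.0, 0.2], [0.1, 0.8]], [1.0, 1.0])
```

A near one-to-one latent-to-parameter mapping, as reported for the ER-trained model, corresponds to each row of the mutual-information matrix being dominated by a different single column, driving MIG towards 1.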


Related research

05/03/2021 — Recovering Barabási-Albert Parameters of Graphs through Disentanglement
"Classical graph modeling approaches such as Erdős Rényi (ER) random grap…"

04/09/2021 — A Graph VAE and Graph Transformer Approach to Generating Molecular Graphs
"We propose a combination of a variational autoencoder and a transformer…"

03/25/2021 — Full Encoder: Make Autoencoders Learn Like PCA
"While the beta-VAE family is aiming to find disentangled representations…"

03/25/2023 — Beta-VAE has 2 Behaviors: PCA or ICA?
"Beta-VAE is a very classical model for disentangled representation learn…"

02/25/2021 — Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling
"Integrating physics models within machine learning holds considerable pr…"

10/02/2021 — Inference-InfoGAN: Inference Independence via Embedding Orthogonal Basis Expansion
"Disentanglement learning aims to construct independent and interpretable…"
