A Probabilistic Autoencoder for Type Ia Supernovae Spectral Time Series

by George Stein, et al.

We construct a physically parameterized probabilistic autoencoder (PAE) to learn the intrinsic diversity of Type Ia supernovae (SNe Ia) from a sparse set of spectral time series. The PAE is a two-stage generative model, composed of an autoencoder (AE) that is interpreted probabilistically after training using a normalizing flow (NF). We demonstrate that the PAE learns a low-dimensional latent space that captures the nonlinear range of features within the population, and can accurately model the spectral evolution of SNe Ia across the full range of wavelengths and observation times directly from the data. By introducing a correlation penalty term and a multi-stage training setup alongside our physically parameterized network, we show that intrinsic and extrinsic modes of variability can be separated during training, removing the need for additional models to perform magnitude standardization. We then use the PAE in a number of downstream tasks on SNe Ia for increasingly precise cosmological analyses, including the automatic detection of SN outliers, the generation of samples consistent with the data distribution, and solving the inverse problem in the presence of noisy and incomplete data to constrain cosmological distance measurements. We find that the optimal number of intrinsic model parameters appears to be three, in line with previous studies, and show that we can standardize our test sample of SNe Ia with an RMS of $0.091 \pm 0.010$ mag, which corresponds to $0.074 \pm 0.010$ mag if peculiar velocity contributions are removed. Trained models and code are released at \href{https://github.com/georgestein/suPAErnova}{github.com/georgestein/suPAErnova}.
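The two-stage design described above (train an autoencoder first, then fit a density model to its latent codes) can be sketched in a deliberately minimal form. This is not the paper's implementation: the encoder and decoder here are linear maps trained by plain gradient descent, a single multivariate Gaussian stands in for the normalizing flow, and all names, sizes, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectra": two intrinsic factors embedded in 10 observed wavelength bins.
n, d_obs, d_lat = 500, 10, 2
z_true = rng.normal(size=(n, d_lat))
W_true = rng.normal(size=(d_lat, d_obs))
X = z_true @ W_true + 0.05 * rng.normal(size=(n, d_obs))

# Stage 1: a linear autoencoder trained by gradient descent on the
# mean squared reconstruction error.
We = 0.1 * rng.normal(size=(d_obs, d_lat))  # encoder weights
Wd = 0.1 * rng.normal(size=(d_lat, d_obs))  # decoder weights
lr = 0.02
for _ in range(3000):
    Z = X @ We                  # latent codes
    R = Z @ Wd - X              # reconstruction residual
    We -= lr * X.T @ (R @ Wd.T) / n
    Wd -= lr * Z.T @ R / n

# Stage 2: a density model fit to the latent codes. The paper trains a
# normalizing flow at this step; a single Gaussian stands in for it here.
Z = X @ We
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False)
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def latent_log_prob(x):
    """Log-density of the latent code of one observation under the fit."""
    h = x @ We - mu
    return -0.5 * (h @ cov_inv @ h + logdet + d_lat * np.log(2 * np.pi))

# Outlier detection: a spectrum far off the learned population receives a
# much lower latent log-probability than a typical one.
typical = latent_log_prob(X[0])
outlier = latent_log_prob(X.mean(axis=0) + 10.0 * (X[0] - X.mean(axis=0)))
```

In the full model the second stage is the key probabilistic ingredient: the flow's exact log-likelihoods in latent space support outlier detection, sampling new SNe consistent with the training distribution, and posterior inference under noisy, incomplete observations.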



