Temporal Autoencoding Improves Generative Models of Time Series

09/12/2013
by Chris Häusler, et al.

Restricted Boltzmann Machines (RBMs) are generative models that can learn useful representations from samples of a dataset in an unsupervised fashion, and they have been widely employed as an unsupervised pre-training method in machine learning. RBMs have been extended to model time series in two main ways: the Temporal RBM (TRBM) stacks a number of RBMs laterally and introduces temporal dependencies between the hidden-layer units, while the Conditional RBM (CRBM) treats past samples of the dataset as a conditional bias and learns a representation that takes these into account. Here we propose a new training method for both the TRBM and the CRBM that enforces the dynamic structure of temporal datasets. We do so by treating the temporal models as denoising autoencoders: past frames of the dataset are considered corrupted versions of the present frame, and the model is trained to minimize its reconstruction error on the present data. We call this approach Temporal Autoencoding (TA). It leads to a significant improvement in the performance of both models on a filling-in-frames task across a number of datasets; on motion capture data, the error is reduced by 56% for the CRBM and 80% for the TRBM. Taking the posterior mean prediction instead of single samples further improves the models' estimates, decreasing the error by as much as 91% for the CRBM on motion capture data. We also trained the models to perform forecasting on a large number of datasets and found TA pretraining to consistently improve forecast performance. Furthermore, examining the prediction error across time shows that this improvement reflects a better representation of the data's dynamics rather than a bias towards reconstructing the observed data on a short time scale.
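The core idea, treating the K most recent frames as a corrupted version of the present frame and pretraining the temporal weights by minimizing reconstruction error of that present frame, can be sketched as an ordinary denoising-autoencoder update. The snippet below is a minimal illustration, not the authors' implementation: the encoder/decoder weights stand in loosely for a CRBM's past-to-present conditional connections, the toy random-walk data and all names and hyperparameters (K, H, lr, epochs) are illustrative assumptions, and a sigmoid hidden layer with a linear output is assumed for real-valued frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: T frames of a D-dimensional time series (assumed, for illustration).
T, D, H, K = 500, 10, 32, 3   # K = number of past frames used as the "corrupted" input
lr, epochs = 0.01, 20

data = np.cumsum(rng.standard_normal((T, D)), axis=0)   # smooth-ish random walk
data = (data - data.mean(0)) / data.std(0)               # normalize each dimension

# Autoencoder parameters: encode the K past frames, decode the present frame.
W_enc = rng.standard_normal((K * D, H)) * 0.01
W_dec = rng.standard_normal((H, D)) * 0.01
b_h = np.zeros(H)
b_v = np.zeros(D)

for epoch in range(epochs):
    err = 0.0
    for t in range(K, T):
        past = data[t - K:t].ravel()      # "corrupted" input: the K past frames
        target = data[t]                  # clean target: the present frame
        h = sigmoid(past @ W_enc + b_h)   # encode
        recon = h @ W_dec + b_v           # decode (linear output for real data)
        delta = recon - target            # gradient of 0.5 * squared error
        err += float(delta @ delta)
        # Backprop through decoder, then encoder.
        grad_h = (W_dec @ delta) * h * (1.0 - h)
        W_dec -= lr * np.outer(h, delta)
        b_v -= lr * delta
        W_enc -= lr * np.outer(past, grad_h)
        b_h -= lr * grad_h
    if epoch % 5 == 0:
        print(f"epoch {epoch}: reconstruction error {err:.2f}")
```

In the spirit of the abstract, weights pretrained this way would then initialize the temporal connections of a TRBM or CRBM before the usual generative training, rather than serve as a model in their own right; the exact hand-off to the Boltzmann machine is specified in the full paper, not here.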


