Continual Variational Autoencoder Learning via Online Cooperative Memorization

07/20/2022
by Fei Ye, et al.

Due to their inference, data-representation and reconstruction capabilities, Variational Autoencoders (VAEs) have been used successfully in continual learning classification tasks. However, their ability to generate images corresponding to the classes and databases learned during Continual Learning (CL) is not well understood, and catastrophic forgetting remains a significant challenge. In this paper, we first analyse the forgetting behaviour of VAEs by developing a new theoretical framework that formulates CL as a dynamic optimal transport problem. This framework establishes approximate bounds on the data likelihood without requiring task information and explains how prior knowledge is lost during training. We then propose a novel memory-buffering approach, the Online Cooperative Memorization (OCM) framework, which consists of a Short-Term Memory (STM) that continually stores recent samples to provide the model with up-to-date information, and a Long-Term Memory (LTM) that aims to preserve a wide diversity of samples. OCM transfers selected samples from the STM to the LTM according to an information-diversity selection criterion, without requiring any supervised signals. Finally, the OCM framework is combined with a dynamic VAE expansion mixture network to further enhance its performance.
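The STM-to-LTM transfer described above can be sketched as a pair of buffers with an unsupervised selection step. This is a minimal illustration only, not the paper's algorithm: the class name, buffer sizes, and the nearest-neighbour distance used as a stand-in for the information-diversity criterion are all assumptions made for the sketch.

```python
import numpy as np

class OnlineCooperativeMemory:
    """Illustrative STM/LTM buffer pair (a sketch, not the paper's exact method)."""

    def __init__(self, stm_size=64, ltm_size=256, diversity_threshold=1.0):
        self.stm, self.ltm = [], []
        self.stm_size = stm_size
        self.ltm_size = ltm_size
        self.threshold = diversity_threshold  # placeholder for the paper's criterion

    def _diversity(self, x):
        # Proxy for an information-diversity score: the distance from x to its
        # nearest neighbour already stored in the LTM (assumed, for illustration).
        if not self.ltm:
            return float("inf")
        return min(float(np.linalg.norm(x - m)) for m in self.ltm)

    def observe(self, x):
        # The STM continually stores the most recent samples.
        self.stm.append(x)
        if len(self.stm) > self.stm_size:
            oldest = self.stm.pop(0)
            # Transfer to the LTM only if it adds diversity; note that no
            # supervised signal (label) is consulted anywhere.
            if self._diversity(oldest) > self.threshold and len(self.ltm) < self.ltm_size:
                self.ltm.append(oldest)
```

Under this sketch, near-duplicate samples leaving the STM are discarded, while samples far from everything the LTM already holds are retained, which is one plausible reading of "preserving a wide diversity of samples".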


Related research:
- Continual Learning in Deep Neural Networks by Using a Kalman Optimiser (05/20/2019)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion (05/20/2023)
- Task-Free Continual Learning via Online Discrepancy Distance Learning (10/12/2022)
- Continual Learning with Optimal Transport based Mixture Model (11/30/2022)
- BooVAE: A scalable framework for continual VAE learning under boosting approach (08/30/2019)
