Continual Learning via Inter-Task Synaptic Mapping

06/26/2021
by Mao Fubing, et al.

Learning from streaming tasks leads a model to catastrophically erase the unique experiences it has absorbed from previous episodes. While regularization techniques such as LwF, SI, and EWC have proven to be an effective avenue for overcoming this issue by constraining important parameters of old tasks from changing when accepting new concepts, these approaches do not exploit the common information of each task that can be shared with existing neurons. As a result, they do not scale well to large-scale problems, since the parameter importance variables quickly explode. An Inter-Task Synaptic Mapping (ISYANA) is proposed here to underpin knowledge retention for continual learning. ISYANA combines the task-to-neuron relationship with the concept-to-concept relationship such that a neuron is prevented from embracing distinct concepts and accepts only relevant ones. A numerical study on benchmark continual learning problems has been carried out, followed by a comparison against prominent continual learning algorithms. ISYANA exhibits competitive performance compared to the state of the art. The code of ISYANA is made available at <https://github.com/ContinualAL/ISYANAKBS>.
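The abstract describes ISYANA only at a high level, so the sketch below is purely illustrative and is not the authors' implementation (the official code is in the repository linked above). All names here (isyana_penalty, importance, relevance) are assumptions; the snippet only shows how a quadratic importance penalty in the spirit of EWC/SI-style regularization could be modulated by a task-to-neuron relevance score, so that neurons holding concepts unrelated to the new task are constrained more strongly.

```python
# Illustrative sketch only; the names (isyana_penalty, importance, relevance)
# are assumptions and do not come from the ISYANA paper or repository.
import torch


def isyana_penalty(model, old_params, importance, relevance, lam=1.0):
    """Quadratic penalty on drifting away from previous-task parameters.

    old_params: dict mapping parameter name -> tensor saved after the old task
    importance: dict mapping parameter name -> per-parameter importance weights
    relevance:  dict mapping parameter name -> assumed task-to-neuron relevance
                in [0, 1]; low relevance marks neurons whose concepts differ
                from the new task and that should change as little as possible.
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        drift = (param - old_params[name]) ** 2
        # Penalize drift more where importance is high and relevance is low.
        weight = importance[name] * (1.0 - relevance[name])
        penalty = penalty + (weight * drift).sum()
    return lam * penalty
```

Under these assumptions, such a penalty would simply be added to the loss of the new task during training, e.g. loss = task_loss + isyana_penalty(model, old_params, importance, relevance).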


Related research

- 07/27/2021: Continual Learning with Neuron Activation Importance. Continual learning is a concept of online learning with multiple sequent...
- 06/28/2021: Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach. Unsupervised continual learning remains a relatively uncharted territory...
- 11/15/2021: Target Layer Regularization for Continual Learning Using Cramer-Wold Generator. We propose an effective regularization strategy (CW-TaLaR) for solving c...
- 03/31/2021: Rainbow Memory: Continual Learning with a Memory of Diverse Samples. Continual learning is a realistic learning scenario for AI models. Preva...
- 04/12/2021: Continual Learning for Text Classification with Information Disentanglement Based Regularization. Continual learning has become increasingly important as it enables NLP m...
- 09/06/2022: Continual Learning: Fast and Slow. According to the Complementary Learning Systems (CLS) theory <cit.> in n...
- 08/14/2023: CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation. Vision-Language Pretraining (VLP) has shown impressive results on divers...
