Few-shot Continual Learning: a Brain-inspired Approach

by Liyuan Wang et al.

Continually learning new tasks from only a few examples is an important yet challenging setting. Although numerous efforts have been devoted to either continual learning or few-shot learning, little work has considered the combined setting of few-shot continual learning (FSCL), which requires minimizing catastrophic forgetting of old tasks while gradually improving few-shot generalization. In this paper, we provide a first systematic study of FSCL and present an effective solution with deep neural networks. Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization, which makes it highly nontrivial to extend few-shot learning strategies to continual learning scenarios. We draw inspiration from the robust brain system and develop a method that (1) interdependently updates a pair of fast and slow weights for continual learning and few-shot learning to disentangle their divergent objectives, inspired by the biological model of meta-plasticity and fast/slow synapses; and (2) applies a brain-inspired two-step consolidation strategy to learn a task sequence without forgetting in the fast weights while improving generalization without overfitting in the slow weights. Extensive results on various benchmarks show that our method achieves better performance than joint training on all the tasks ever seen. The ability of few-shot generalization is also substantially improved by incoming tasks and examples.
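The abstract's core idea of paired fast/slow weights with a two-step consolidation can be illustrated with a minimal sketch. Note that the update rules below (initializing fast weights from the slow weights, then consolidating via an exponential moving average) are illustrative assumptions for exposition, not the authors' exact algorithm, and all function and parameter names are hypothetical.

```python
# Minimal, assumption-laden sketch of fast/slow weights with a
# two-step consolidation for learning a sequence of few-shot tasks.
import numpy as np

rng = np.random.default_rng(0)

def task_gradient(w, task_data):
    # Placeholder gradient: pulls the weights toward the mean of the
    # task's few examples (stands in for a real task loss gradient).
    return w - task_data.mean(axis=0)

def fscl_sketch(tasks, lr=0.1, steps=20, tau=0.05):
    dim = tasks[0].shape[1]
    fast = np.zeros(dim)   # adapts quickly to the current task
    slow = np.zeros(dim)   # consolidated slowly for generalization
    for task_data in tasks:
        # Step 1: fast weights, initialized from the consolidated slow
        # weights, adapt to the new few-shot task.
        fast = slow.copy()
        for _ in range(steps):
            fast -= lr * task_gradient(fast, task_data)
        # Step 2: consolidate a small fraction of the fast weights
        # into the slow weights (EMA), limiting overfitting to any
        # single few-shot task.
        slow = (1 - tau) * slow + tau * fast
    return fast, slow

# Three toy "tasks", each with 5 examples of dimension 3.
tasks = [rng.normal(loc=i, size=(5, 3)) for i in range(3)]
fast, slow = fscl_sketch(tasks)
```

The split keeps the two divergent objectives apart: the fast weights absorb each new task without being constrained by generalization, while the slow weights accumulate knowledge gradually enough to avoid both forgetting and overfitting.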



Few-Shot Self Reminder to Overcome Catastrophic Forgetting

Deep neural networks are known to suffer the catastrophic forgetting pro...

Few-Shot and Continual Learning with Attentive Independent Mechanisms

Deep neural networks (DNNs) are known to perform well when deployed to t...

Generalising via Meta-Examples for Continual Learning in the Wild

Learning quickly and continually is still an ambitious task for neural n...

Continual Few-Shot Learning Using HyperTransformers

We focus on the problem of learning without forgetting from multiple tas...

IF2Net: Innately Forgetting-Free Networks for Continual Learning

Continual learning can incrementally absorb new concepts without interfe...

Few-Shot Unsupervised Continual Learning through Meta-Examples

In real-world applications, data do not reflect the ones commonly used f...

Continual One-Shot Learning of Hidden Spike-Patterns with Neural Network Simulation Expansion and STDP Convergence Predictions

This paper presents a constructive algorithm that achieves successful on...
