Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning

by Tianlong Chen, et al.

Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving knowledge of previously learned classes. That catastrophic forgetting issue can be mitigated by storing historical data for replay, which yet causes memory overhead as well as imbalanced prediction updates. To address this dilemma, we propose to leverage "free" external unlabeled data querying in continual learning. We first present a CIL with Queried Unlabeled Data (CIL-QUD) scheme, in which we store only a handful of past training samples as anchors and use them to query relevant unlabeled examples each time. Along with new and stored past data, the queried unlabeled data are effectively utilized through learning-without-forgetting (LwF) regularizers and class-balanced training. Besides preserving model generalization over past and current tasks, we next study the problem of adversarial robustness for CIL-QUD. Inspired by the recent success of learning robust models with unlabeled data, we explore a new robustness-aware CIL setting, where the learned adversarial robustness has to resist forgetting and be transferred as new tasks continually arrive. While existing options easily fail, we show that queried unlabeled data can continue to benefit, and we seamlessly extend CIL-QUD into its robustified version, RCIL-QUD. Extensive experiments demonstrate that CIL-QUD achieves substantial accuracy gains on CIFAR-10 and CIFAR-100 compared to previous state-of-the-art CIL approaches. Moreover, RCIL-QUD establishes the first strong milestone for robustness-aware CIL. Code is available at https://github.com/VITA-Group/CIL-QUD.
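The abstract's core retrieval idea, storing a few past samples as anchors and using them to query relevant unlabeled examples, can be illustrated with a minimal sketch. The cosine-similarity top-k rule below is a hypothetical querying criterion for illustration; the paper's exact scoring function and feature extractor may differ, and `query_unlabeled` is an invented helper name, not part of the released codebase.

```python
import numpy as np

def query_unlabeled(anchors, unlabeled, k=2):
    """For each anchor embedding, retrieve the indices of the k most
    similar unlabeled embeddings by cosine similarity (a hypothetical
    stand-in for the paper's querying rule)."""
    # L2-normalize rows so dot products equal cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    u = unlabeled / np.linalg.norm(unlabeled, axis=1, keepdims=True)
    sims = a @ u.T                       # (n_anchors, n_unlabeled)
    # top-k most similar unlabeled points per anchor
    topk = np.argsort(-sims, axis=1)[:, :k]
    # de-duplicated pool of queried unlabeled indices
    return np.unique(topk)

# Usage: two anchor embeddings query a pool of four unlabeled embeddings.
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])
unlabeled = np.array([[0.9, 0.1], [0.1, 0.9], [1.0, 0.0], [-1.0, 0.0]])
queried = query_unlabeled(anchors, unlabeled, k=1)
print(queried)  # indices of unlabeled examples selected for replay-style training
```

In the full scheme, the examples selected this way would then enter training alongside new-task data, with LwF-style distillation and class-balanced updates applied on top; this sketch covers only the querying step.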


