Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning

by Zeyin Song, et al.

Few-shot class-incremental learning (FSCIL) aims at learning to classify new classes continually from limited samples without forgetting the old classes. The mainstream framework tackling FSCIL first adopts the cross-entropy (CE) loss for training at the base session, then freezes the feature extractor to adapt to new classes. However, in this work, we find that the CE loss is not ideal for the base session training, as it suffers from poor class separation in terms of representations, which further degrades generalization to novel classes. One tempting method to mitigate this problem is to apply an additional naive supervised contrastive learning (SCL) objective in the base session. Unfortunately, we find that although SCL can create a slightly better representation separation among different base classes, it still struggles to separate base classes from new classes. Inspired by these observations, we propose the Semantic-Aware Virtual Contrastive model (SAVC), a novel method that facilitates separation between new classes and base classes by introducing virtual classes into SCL. These virtual classes, which are generated via pre-defined transformations, not only act as placeholders for unseen classes in the representation space, but also provide diverse semantic information. By learning to recognize and contrast in the fantasy space fostered by virtual classes, our SAVC significantly boosts base class separation and novel class generalization, achieving new state-of-the-art performance on the three widely-used FSCIL benchmark datasets. Code is available at: https://github.com/zysong0113/SAVC.
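The abstract's core idea of "virtual classes generated via pre-defined transformations" combined with supervised contrastive learning can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes rotation is the pre-defined transformation (a common choice in label-augmentation work), so each (class, rotation) pair becomes its own virtual class, and a plain supervised contrastive loss (Khosla et al.) is then computed over the enlarged virtual label space. All function names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def make_virtual_classes(images, labels):
    """Expand a batch with rotated copies; each (class, rotation)
    pair becomes a distinct virtual class. Rotation as the
    pre-defined transformation is an assumption for illustration."""
    views, virtual_labels = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        views.append(torch.rot90(images, k, dims=(2, 3)))
        virtual_labels.append(labels * 4 + k)  # disjoint virtual label ids
    return torch.cat(views), torch.cat(virtual_labels)

def supcon_loss(features, labels, temperature=0.07):
    """Plain supervised contrastive loss over (virtual) labels:
    samples sharing a virtual class are treated as positives."""
    features = F.normalize(features, dim=1)
    n = features.size(0)
    sim = features @ features.t() / temperature
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # exclude self-similarity from the softmax denominator
    sim = sim.masked_fill(~not_self, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability over each sample's positives
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    loss = -pos_log_prob / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

In this sketch, the rotated copies occupy otherwise empty regions of the representation space during base training, which is the "placeholder for unseen classes" intuition described in the abstract.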



