ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation

07/23/2020
by   Fei Mi, et al.

Session-based recommendation has received growing attention recently due to increasing privacy concerns. Despite the recent success of neural session-based recommenders, they are typically developed in an offline manner using a static dataset. However, recommendation requires continual adaptation to account for new and obsolete items and users, and therefore demands "continual learning" in real-life applications. In this setting, the recommender is updated continually and periodically with the new data arriving in each update cycle, and the updated model must serve recommendations for user activity until the next update. A major challenge for continual learning with neural models is catastrophic forgetting, in which a continually trained model forgets the user preference patterns it learned before. To address this challenge, we propose a method called Adaptively Distilled Exemplar Replay (ADER), which periodically replays previous training samples (i.e., exemplars) to the current model with an adaptive distillation loss. Experiments are conducted on the state-of-the-art SASRec model using two widely used datasets to benchmark ADER against several well-known continual learning techniques. We empirically demonstrate that ADER consistently outperforms the other baselines, and that it even outperforms a model retrained on all historical data at every update cycle. This result reveals that ADER is a promising solution for mitigating catastrophic forgetting and building more realistic and scalable session-based recommenders.
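The abstract describes the mechanism at a high level: replay a small stored set of exemplars alongside the newly arrived data, and regularize the update with a distillation loss against a frozen copy of the previous model, weighting that loss adaptively across update cycles. Below is a minimal sketch of that idea, assuming a PyTorch setup; all names (model, old_model, exemplar_loader, update_cycle) are hypothetical, and the adaptive weight lam is an illustrative placeholder rather than the paper's exact schedule.

```python
# Sketch of exemplar replay with an adaptively weighted distillation loss.
# Hypothetical names throughout; not the authors' reference implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft cross-entropy between the frozen previous model (teacher)
    and the current model (student) on replayed exemplars."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

def update_cycle(model, old_model, new_loader, exemplar_loader,
                 optimizer, n_exemplars, n_new):
    # Placeholder adaptive weight: grows when exemplars are few relative
    # to the new data, so old knowledge is not drowned out. ADER's actual
    # formula may differ; this is an assumption for illustration.
    lam = (n_exemplars / n_new) ** 0.5
    model.train()
    old_model.eval()
    for (x_new, y_new), (x_ex, y_ex) in zip(new_loader, exemplar_loader):
        # Standard next-item loss on the newly arrived sessions.
        loss_new = F.cross_entropy(model(x_new), y_new)
        # Replay exemplars: fit their labels and distill the old model's
        # predictions to preserve previously learned preference patterns.
        logits_ex = model(x_ex)
        with torch.no_grad():
            old_logits_ex = old_model(x_ex)
        loss_ex = F.cross_entropy(logits_ex, y_ex)
        loss_kd = distillation_loss(logits_ex, old_logits_ex)
        loss = loss_new + loss_ex + lam * loss_kd
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In practice, the exemplar set would be refreshed after each update cycle (e.g., by keeping a bounded number of representative sessions), so memory stays constant while the frozen previous model serves as the distillation teacher for the next cycle.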


Related Research

10/02/2020 · Continual Learning for Natural Language Generation in Task-oriented Dialog Systems
Natural language generation (NLG) is an essential component of task-orie...

05/20/2023 · Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion
Task-incremental continual learning refers to continually training a mod...

01/13/2022 · Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners
In the SSLAD-Track 3B challenge on continual learning, we propose the me...

05/26/2022 · Continual Learning for Visual Search with Backward Consistent Feature Embedding
In visual search, the gallery set could be incrementally growing and add...

03/23/2023 · First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning
In Class-Incremental Learning (CIL) an image classification system is ex...

02/07/2023 · Keeping Pace with Ever-Increasing Data: Towards Continual Learning of Code Intelligence Models
Previous research on code intelligence usually trains a deep learning mo...

09/18/2023 · CaT: Balanced Continual Graph Learning with Graph Condensation
Continual graph learning (CGL) is purposed to continuously update a grap...
