A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

by Zhenyi Wang et al.

Forgetting refers to the loss or deterioration of previously acquired information or knowledge. While existing surveys on forgetting have focused primarily on continual learning, forgetting is a prevalent phenomenon in many other research domains within deep learning. It manifests, for example, in generative models due to generator shifts and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting raises several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing task interference under conflicting goals, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword that can be beneficial and even desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we aim to present a more nuanced understanding of this phenomenon and highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from the various fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications. A comprehensive list of papers about forgetting in various research fields is available at <https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning>.
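The core phenomenon the survey studies can be reproduced in miniature: train a model on one task, then train it on a second task without access to the first task's data, and the first task's error climbs back up. The sketch below (not from the survey; the two random linear-regression "tasks" and all hyperparameters are illustrative assumptions) shows this with a plain least-squares model trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression "tasks": same inputs, conflicting target functions,
# so fitting task B overwrites the solution for task A.
X = rng.normal(size=(100, 5))
w_a, w_b = rng.normal(size=5), rng.normal(size=5)
y_a, y_b = X @ w_a, X @ w_b

def mse(w, X, y):
    # Mean squared error of the linear model with weights w.
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=300):
    # Full-batch gradient descent on the squared loss.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(5)
w = train(w, X, y_a)            # learn task A
loss_a_before = mse(w, X, y_a)  # near zero: task A is learned
w = train(w, X, y_b)            # learn task B, no task-A data replayed
loss_a_after = mse(w, X, y_a)   # task-A error rises sharply: forgetting
print(loss_a_before, loss_a_after)
```

The same dynamic, with deep networks in place of this linear model, is what continual-learning methods (replay, regularization, parameter isolation) are designed to counteract.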



Continual Learning of Natural Language Processing Tasks: A Survey

Continual learning (CL) is an emerging learning paradigm that aims to em...

On robustness of generative representations against catastrophic forgetting

Catastrophic forgetting of previously learned knowledge while learning n...

Meta-Learning Representations for Continual Learning

A continual learning agent should be able to build on top of existing kn...

Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning

Federated Learning (FL) has gained significant attention due to its abi...

Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models

The recent proliferation of large-scale text-to-image models has led to ...

Intentional Forgetting

Many damaging cybersecurity attacks are enabled when an attacker can acc...

HAT-CL: A Hard-Attention-to-the-Task PyTorch Library for Continual Learning

Catastrophic forgetting, the phenomenon in which a neural network loses ...
