Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks

by Muhammad Umer, et al.

Artificial neural networks are well known to be susceptible to catastrophic forgetting when continually learning from sequences of tasks. Various continual (or "incremental") learning approaches have been proposed to avoid catastrophic forgetting, but they are typically adversary agnostic, i.e., they do not consider the possibility of a malicious attack. In this effort, we explore the vulnerability of Elastic Weight Consolidation (EWC), a popular continual learning algorithm for avoiding catastrophic forgetting. We show that an intelligent adversary can bypass EWC's defenses and instead cause gradual and deliberate forgetting by introducing small amounts of misinformation to the model during training. We demonstrate such an adversary's ability to assume control of the model via injection of "backdoor" attack samples on both permuted and split benchmark variants of the MNIST dataset. Importantly, once the model has learned the adversarial misinformation, the adversary can then control the amount of forgetting of any task. Equivalently, the malicious actor can create a "false memory" about any task by inserting carefully designed backdoor samples into any fraction of the test instances of that task. Perhaps most damaging, we show this vulnerability to be very acute; neural network memory can be easily compromised with the addition of backdoor samples into as little as 1
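The abstract describes poisoning a small fraction of training data with backdoor trigger samples. As a minimal sketch of how such a poisoning step might look on MNIST-like data (the function name, trigger location, and geometry here are illustrative assumptions, not the authors' exact attack):

```python
import numpy as np

def add_backdoor_trigger(images, labels, target_label, poison_frac=0.01,
                         trigger_value=1.0, trigger_size=3, seed=0):
    """Stamp a small square trigger into a random fraction of images
    and relabel them with the attacker-chosen target label.

    images: (N, 28, 28) float array in [0, 1] (MNIST-like)
    labels: (N,) int array of class labels
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger pattern in the bottom-right corner of each
    # selected image (an assumed, commonly used trigger location).
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    # Mislabel the poisoned samples so training associates the
    # trigger with the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx
```

At test time, stamping the same trigger onto clean inputs of a task would then steer the model toward the target label, which is the "false memory" effect the abstract describes.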


Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

Continual (or "incremental") learning approaches are employed when addit...

Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study

A large number of incremental learning algorithms have been proposed to a...

False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger

In this brief, we show that sequentially learning new information presen...

Continual Competitive Memory: A Neural System for Online Task-Free Lifelong Learning

In this article, we propose a novel form of unsupervised learning, conti...

Lethean Attack: An Online Data Poisoning Technique

Data poisoning is an adversarial scenario where an attacker feeds a spec...

KASAM: Spline Additive Models for Function Approximation

Neural networks have been criticised for their inability to perform cont...

GAN Memory with No Forgetting

Seeking to address the fundamental issue of memory in lifelong learning,...
