Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning

03/28/2023
by Antonio Carta, et al.

Distributed learning on the edge often involves self-centered devices (SCDs), which learn local tasks independently and are unwilling to contribute to the performance of other SCDs. How do we achieve forward transfer at zero cost for the individual SCDs? We formalize this problem as a Distributed Continual Learning scenario, where SCDs adapt to local tasks and a CL model consolidates the knowledge from the resulting stream of models without accessing the SCDs' private data. Unfortunately, current CL methods are not directly applicable to this scenario. We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method that consolidates the stream of SCD models without using the original data. DAC performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, in both rehearsal-free and distributed CL scenarios. Somewhat surprisingly, even a single out-of-distribution image is sufficient as the only source of data during consolidation.
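The abstract does not spell out the loss, but the core idea of matching latent activations across models whose feature spaces may differ in dimensionality can be sketched in a few lines. The following is a minimal PyTorch sketch, not the paper's implementation: the class name, the choice of a single linear projector, and the MSE matching criterion are all assumptions made here for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectedLatentDistillationLoss(nn.Module):
        """Hypothetical sketch: distill a frozen teacher's latent
        activations into a student whose latent space may have a
        different dimensionality, via a trainable linear projection."""

        def __init__(self, student_dim: int, teacher_dim: int):
            super().__init__()
            # Trainable projector mapping student latents into the
            # teacher's latent space (assumption: one linear map).
            self.proj = nn.Linear(student_dim, teacher_dim)

        def forward(self, student_latent, teacher_latent):
            # Match projected student latents to detached teacher
            # latents, so gradients flow only into the student and
            # the projector, never into the teacher.
            return F.mse_loss(self.proj(student_latent),
                              teacher_latent.detach())

    # Toy usage: the latents could be extracted from any probe input,
    # e.g. a single out-of-distribution image, consistent with the
    # data-agnostic setting described in the abstract.
    loss_fn = ProjectedLatentDistillationLoss(student_dim=256, teacher_dim=512)
    student_latent = torch.randn(8, 256, requires_grad=True)
    teacher_latent = torch.randn(8, 512)
    loss = loss_fn(student_latent, teacher_latent)
    loss.backward()

Because consolidation never touches the SCDs' private data, the only degrees of freedom are the probe inputs and the projection; the paper's actual objective presumably combines a latent-space term like this with output-level distillation, per the "double knowledge distillation" mentioned above.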

