Random Teachers are Good Teachers

by Felix Sarnthein, et al.

In this work, we investigate the implicit regularization induced by teacher-student learning dynamics. To isolate its effect, we describe a simple experiment in which, instead of trained teachers, we consider teachers at random initialization. Surprisingly, when distilling a student into such a random teacher, we observe that the resulting model and its representations already possess very interesting characteristics: (1) we observe a strong improvement of the distilled student over its teacher in terms of probing accuracy; (2) the learnt representations are highly transferable between different tasks but deteriorate strongly if trained on random inputs; (3) the student checkpoint suffices to discover so-called lottery tickets, i.e. it contains identifiable, sparse networks that are as performant as the full network. These observations have interesting consequences for several important areas in machine learning: (1) self-distillation can work solely based on the implicit regularization present in the gradient dynamics, without relying on any dark knowledge; (2) self-supervised learning can learn features even in the absence of data augmentation; and (3) SGD already becomes stable with respect to batch orderings when initialized from the student checkpoint. Finally, we shed light on an intriguing local property of the loss landscape: the process of feature learning is strongly amplified if the student is initialized closely to the teacher. This raises interesting questions about the nature of the landscape that have so far remained unexplored.
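The core experimental setup above can be sketched in a few lines: freeze a network at random initialization as the "teacher", and train an independently initialized "student" of the same architecture to match the teacher's outputs. The sketch below is a minimal, hypothetical illustration with a toy one-hidden-layer network in NumPy (all sizes, learning rate, and step counts are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration): input, hidden, output, batch size.
d_in, d_h, d_out, n = 8, 16, 4, 256

def init(rng):
    # One-hidden-layer tanh network with fan-in scaled random weights.
    return {
        "W1": rng.normal(0.0, 1.0 / np.sqrt(d_in), (d_in, d_h)),
        "W2": rng.normal(0.0, 1.0 / np.sqrt(d_h), (d_h, d_out)),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"])
    return h, h @ p["W2"]

teacher = init(rng)   # frozen at random initialization -- never trained
student = init(rng)   # independently initialized, will be distilled

X = rng.normal(size=(n, d_in))
_, target = forward(teacher, X)  # random-teacher targets: no "dark knowledge"

lr, losses = 0.1, []
for _ in range(500):
    h, out = forward(student, X)
    err = out - target                                   # grad of 0.5 * MSE
    g2 = h.T @ err / n                                   # dL/dW2
    g1 = X.T @ ((err @ student["W2"].T) * (1 - h**2)) / n  # dL/dW1 via tanh'
    student["W1"] -= lr * g1
    student["W2"] -= lr * g2
    losses.append(0.5 * np.mean(err**2))

print(f"distillation loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The distillation loss drops steadily even though the teacher encodes no learned signal, which is exactly the regime the paper studies: any structure the student acquires must come from the training dynamics themselves, not from knowledge in the teacher.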




