Transfer Learning with Sparse Associative Memories

04/04/2019
by Quentin Jodelet, et al.

In this paper, we introduce a novel layer designed to be used as the output of pre-trained neural networks in the context of classification. Based on Associative Memories, this layer can help design Deep Neural Networks that support incremental learning and can be (partially) trained in real time on embedded devices. Experiments on the ImageNet dataset and on other domain-specific datasets show that it is possible to design more flexible and faster-to-train Neural Networks at the cost of a slight decrease in accuracy.
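To make the idea concrete, the sketch below shows a minimal Hebbian-style sparse associative memory used as a classifier head on top of features from a frozen, pre-trained backbone. It is an illustrative assumption, not the authors' exact architecture: the top-k binarization, the Hebbian accumulation rule, and the stand-in features are all hypothetical choices meant only to show how such a head can be trained incrementally, without gradient descent, and extended with new classes on the fly.

```python
# Hedged sketch (not the paper's exact method): a sparse associative
# memory as the output layer of a frozen pre-trained feature extractor.
import numpy as np


class SparseAssociativeMemoryHead:
    def __init__(self, feature_dim, sparsity_k):
        self.feature_dim = feature_dim
        self.sparsity_k = sparsity_k  # number of active units in each sparse code
        self.weights = {}             # class label -> accumulated association vector

    def _sparse_code(self, features):
        # Keep only the k largest activations (binary sparse code).
        code = np.zeros_like(features)
        top_k = np.argsort(features)[-self.sparsity_k:]
        code[top_k] = 1.0
        return code

    def learn(self, features, label):
        # Hebbian-style update: accumulate the sparse code into the class's
        # association vector. New classes can be added at any time, which is
        # what makes incremental learning cheap.
        code = self._sparse_code(features)
        if label not in self.weights:
            self.weights[label] = np.zeros(self.feature_dim)
        self.weights[label] += code

    def predict(self, features):
        # Classify by overlap between the query's sparse code and each
        # stored association vector.
        code = self._sparse_code(features)
        scores = {label: float(code @ w) for label, w in self.weights.items()}
        return max(scores, key=scores.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    head = SparseAssociativeMemoryHead(feature_dim=512, sparsity_k=32)

    # Stand-in for backbone features of two classes (assumption: in practice
    # these would come from a frozen pre-trained CNN).
    cat_proto, dog_proto = rng.normal(size=512), rng.normal(size=512)
    for _ in range(20):
        head.learn(cat_proto + 0.1 * rng.normal(size=512), "cat")
        head.learn(dog_proto + 0.1 * rng.normal(size=512), "dog")

    print(head.predict(cat_proto + 0.1 * rng.normal(size=512)))  # expected: "cat"
```

Because training reduces to accumulating sparse codes, the head can be updated sample by sample on-device, which is the property the abstract highlights for embedded, real-time use.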
