Few-shot learning with attention-based sequence-to-sequence models

11/08/2018
by Bertrand Higy, et al.

End-to-end approaches have recently become popular as a means of simplifying the training and deployment of speech recognition systems. However, they often require large amounts of data to perform well on large-vocabulary tasks. With the aim of making end-to-end approaches usable by a broader range of researchers, we explore their potential in small-vocabulary contexts where smaller datasets may be used. A significant drawback of small-vocabulary systems is the difficulty of expanding the vocabulary beyond the original training samples; we therefore also study strategies to extend the vocabulary with only a few examples per new class (few-shot learning). Our results show that an attention-based encoder-decoder can be competitive against a strong baseline on a small-vocabulary keyword classification task, reaching 97.5% accuracy. It also shows promising results on the few-shot learning problem, where a simple strategy achieved 34.8% accuracy with only a few examples of each new class. This score goes up to 80.3%.
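The paper's implementation is not reproduced here. Purely as a minimal sketch, assuming log-mel input features and a single learned attention query (all layer sizes, names, and architectural choices below are illustrative assumptions, not the authors' model), an attention-based encoder-decoder keyword classifier might look like this in PyTorch:

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class AttentionEncoderDecoder(nn.Module):
    """Toy attention-based encoder-decoder for keyword classification."""

    def __init__(self, n_mels=40, hidden=128, n_keywords=12):
        super().__init__()
        # Encoder: bidirectional GRU over acoustic feature frames.
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True,
                              bidirectional=True)
        # A single learned query attends over the encoder states;
        # effectively a one-step decoder, since each utterance
        # contains exactly one keyword.
        self.query = nn.Parameter(torch.randn(2 * hidden))
        self.classifier = nn.Linear(2 * hidden, n_keywords)

    def forward(self, feats):                    # feats: (B, T, n_mels)
        states, _ = self.encoder(feats)          # (B, T, 2*hidden)
        scores = states @ self.query             # (B, T) attention scores
        weights = torch.softmax(scores, dim=1)   # normalise over time
        context = (weights.unsqueeze(-1) * states).sum(dim=1)
        return self.classifier(context)          # (B, n_keywords) logits


# Toy usage: a batch of 4 utterances, 100 frames of 40-dim features each.
model = AttentionEncoderDecoder()
print(model(torch.randn(4, 100, 40)).shape)  # torch.Size([4, 12])
```

The abstract does not specify what the "simple strategy" for vocabulary extension is. As an illustration of one common few-shot approach (nearest-class-mean prototypes, not necessarily the paper's method), a new keyword could be represented by the averaged attention context of its few labelled examples, with classification done by cosine similarity to each prototype:

```python
import torch.nn.functional as F


@torch.no_grad()
def keyword_prototype(model, support_feats):
    """support_feats: (k, T, n_mels) -- the k available examples of one
    new keyword. Returns its mean attention context as a prototype."""
    states, _ = model.encoder(support_feats)
    weights = torch.softmax(states @ model.query, dim=1)
    contexts = (weights.unsqueeze(-1) * states).sum(dim=1)
    return F.normalize(contexts, dim=1).mean(dim=0)  # (2*hidden,)
```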
