Effectiveness of Text, Acoustic, and Lattice-based representations in Spoken Language Understanding tasks

12/16/2022
by   Esaú Villatoro-Tello, et al.

In this paper, we perform an exhaustive evaluation of different representations to address the intent classification problem in a Spoken Language Understanding (SLU) setup. We benchmark three types of systems for the SLU intent detection task: 1) text-based, 2) lattice-based, and a novel 3) multimodal approach. Our work provides a comprehensive analysis of the achievable performance of different state-of-the-art SLU systems under different circumstances, e.g., automatically vs. manually generated transcripts. We evaluate the systems on the publicly available SLURP spoken language resource corpus. Our results indicate that using richer forms of Automatic Speech Recognition (ASR) outputs allows SLU systems to improve over the 1-best setup (4% relative improvement). However, crossmodal approaches, i.e., those learning from both acoustic and text embeddings, obtain performance similar to the oracle setup, with a relative improvement of 18% over the 1-best configuration. Thus, crossmodal architectures represent a good alternative for overcoming the limitations of working with purely automatically generated textual data.
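To make the crossmodal idea concrete, the following is a minimal, purely illustrative sketch of late fusion: concatenating a per-utterance text embedding with an acoustic embedding before intent classification. All function names, dimensions, and intent labels here are hypothetical and not taken from the paper, which does not specify its fusion mechanism at this level of detail.

```python
# Toy late-fusion intent classifier: concatenate text and acoustic
# embeddings, then score each intent with a linear layer.
# Everything below is an illustrative assumption, not the paper's model.

from typing import List


def fuse_embeddings(text_emb: List[float], acoustic_emb: List[float]) -> List[float]:
    """Concatenate per-utterance text and acoustic embeddings."""
    return list(text_emb) + list(acoustic_emb)


def classify_intent(fused: List[float],
                    weights: List[List[float]],
                    labels: List[str]) -> str:
    """Return the intent whose weight vector yields the largest dot product."""
    scores = [sum(w * x for w, x in zip(row, fused)) for row in weights]
    return labels[scores.index(max(scores))]


# Hypothetical example: 2-dim text embedding, 2-dim acoustic embedding,
# and two made-up intent labels with hand-set weights.
fused = fuse_embeddings([0.9, 0.1], [0.2, 0.8])
weights = [
    [1.0, 0.0, 0.0, 0.0],  # "set_alarm" attends to the text features
    [0.0, 0.0, 0.0, 1.0],  # "play_music" attends to the acoustic features
]
print(classify_intent(fused, weights, ["set_alarm", "play_music"]))
```

In a real system the two embeddings would come from pretrained text and speech encoders and the classifier would be learned, but the fusion step itself can be as simple as this concatenation.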
