Modeling Long Context for Task-Oriented Dialogue State Generation

04/29/2020
by   Jun Quan, et al.

Based on the recently proposed transferable dialogue state generator (TRADE), which predicts dialogue states from an utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation. By enabling the model to learn a better representation of the long dialogue context, our approach addresses the problem that the baseline's performance drops significantly when the input dialogue context sequence is long. In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on MultiWOZ 2.0.
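The "utterance tagging technique" mentioned in the abstract can be pictured as prefixing each turn in the concatenated dialogue history with a speaker tag before encoding, so the model can tell turns apart in a long context. The sketch below is a minimal illustration under that assumption; the tag tokens [sys] and [usr], the function name, and the example dialogue are all hypothetical, not the paper's exact implementation.

```python
from typing import List, Tuple

SYS_TAG = "[sys]"  # assumed tag token for system utterances
USR_TAG = "[usr]"  # assumed tag token for user utterances

def tag_and_concatenate(dialogue: List[Tuple[str, str]]) -> str:
    """Prefix each utterance with a speaker tag and join into one context string.

    `dialogue` is a list of (speaker, utterance) pairs, where speaker is
    either "system" or "user".
    """
    tagged = []
    for speaker, utterance in dialogue:
        tag = SYS_TAG if speaker == "system" else USR_TAG
        tagged.append(f"{tag} {utterance.strip()}")
    # The tagged context is what a TRADE-style encoder would consume.
    return " ".join(tagged)

if __name__ == "__main__":
    history = [
        ("user", "I need a cheap hotel in the north."),
        ("system", "The Alpha Lodge is a cheap hotel in the north."),
        ("user", "Great, book it for two nights please."),
    ]
    print(tag_and_concatenate(history))
    # [usr] I need a cheap hotel in the north. [sys] The Alpha Lodge is a
    # cheap hotel in the north. [usr] Great, book it for two nights please.
```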
