Semantic-based Self-Critical Training For Question Generation

08/26/2021
by   Kwate Dassi, et al.
In this work we present a fully Transformer-based reinforcement learning generator-evaluator architecture for neural question generation. Question generation is the task of generating a question given a context and an answer. To improve the quality of the generated questions, we propose a semantic-based self-critical training scheme within the generator-evaluator architecture, which goes beyond typical maximum likelihood training. Evaluation metrics for language generation that rely solely on n-gram overlap do not capture semantic relations between reference and candidate strings. To strengthen the evaluation step, we assess our model both on n-gram overlap, using BLEU, and semantically, using BERTScore and NUBIA, a recent state-of-the-art evaluation metric for text generation. Question generation has many downstream applications, including extending question answering datasets and supporting conversational and educational assessment systems.
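The core ingredient described above is self-critical sequence training driven by a semantic reward rather than an n-gram one. The following is a minimal sketch of how such a loss can be computed, assuming a Hugging Face T5-style generator and BERTScore F1 as the semantic reward; the function names (`compute_scst_loss`, `sequence_log_prob`) and generation settings are illustrative, not taken from the paper.

```python
# Sketch of a semantic self-critical (SCST-style) loss: the greedy decode is the
# baseline, a sampled question is the candidate, and BERTScore F1 against the
# reference question provides the reward. Illustrative only.
import torch
from bert_score import score as bert_score


def sequence_log_prob(model, input_ids, attention_mask, labels, pad_id):
    """Sum of per-token log-probabilities of `labels` under the generator."""
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    log_probs = torch.log_softmax(out.logits, dim=-1)                 # (B, T, V)
    token_lp = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1) # (B, T)
    mask = (labels != pad_id).float()                                 # ignore padding
    return (token_lp * mask).sum(dim=-1)                              # (B,)


def compute_scst_loss(model, tokenizer, input_ids, attention_mask, references):
    # Greedy decode is the self-critical baseline; sampling gives the candidate.
    greedy_ids = model.generate(input_ids, attention_mask=attention_mask,
                                do_sample=False, max_new_tokens=64)
    sample_ids = model.generate(input_ids, attention_mask=attention_mask,
                                do_sample=True, max_new_tokens=64)
    greedy_txt = tokenizer.batch_decode(greedy_ids, skip_special_tokens=True)
    sample_txt = tokenizer.batch_decode(sample_ids, skip_special_tokens=True)

    # Semantic reward: BERTScore F1 against the reference questions.
    _, _, f1_sample = bert_score(sample_txt, list(references), lang="en")
    _, _, f1_greedy = bert_score(greedy_txt, list(references), lang="en")
    advantage = (f1_sample - f1_greedy).to(input_ids.device)

    # Drop the leading decoder-start token so generated ids align with labels.
    sample_labels = sample_ids[:, 1:]
    log_p = sequence_log_prob(model, input_ids, attention_mask,
                              sample_labels, tokenizer.pad_token_id)

    # REINFORCE with the greedy baseline: sampled questions that score higher
    # semantically than the greedy output get pushed up, the rest pushed down.
    return -(advantage.detach() * log_p).mean()
```

Because the greedy output acts as the baseline, the gradient only rewards sampled questions that improve on the greedy decode under the semantic metric, which is the sense in which this training goes beyond maximum likelihood.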
