Zero-shot Text-to-SQL Learning with Auxiliary Task

08/29/2019
by Shuaichen Chang, et al.

Recent years have seen great success in applying neural seq2seq models to the text-to-SQL task. However, little work has examined how these models generalize to realistic unseen data, which naturally raises a question: does this impressive performance signify a perfectly generalizing model, or are there still limitations? In this paper, we first diagnose the bottleneck of the text-to-SQL task by providing a new testbed, in which we observe that existing models generalize poorly to rarely seen data. This analysis encourages us to design a simple but effective auxiliary task, which serves both as a supportive model and as a regularization term on the generation task, improving the models' generalization. Experimentally, we evaluate our models on a large text-to-SQL dataset, WikiSQL. Compared to a strong coarse-to-fine baseline model, our models improve over the baseline by more than 3 points absolute in accuracy on the whole dataset. More interestingly, on a zero-shot test subset of WikiSQL, our models achieve a 5-point absolute improvement, clearly demonstrating their superior generalizability.
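The abstract does not spell out how the auxiliary task is attached to the generation model, but a common way to realize "a supportive model that also acts as a regularization term" is a weighted multi-task loss over a shared encoder. Below is a minimal PyTorch sketch under that assumption; `MultiTaskLoss`, `lambda_aux`, and the particular choice of auxiliary objective are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Sketch of a generation loss regularized by an auxiliary objective.

    Assumptions (not from the paper): both heads share an encoder upstream,
    both objectives are token-level cross-entropy, and the auxiliary term
    is mixed in with a fixed weight `lambda_aux`.
    """

    def __init__(self, lambda_aux: float = 0.5):
        super().__init__()
        self.lambda_aux = lambda_aux
        self.gen_loss = nn.CrossEntropyLoss()  # main SQL-generation objective
        self.aux_loss = nn.CrossEntropyLoss()  # auxiliary objective

    def forward(self, gen_logits, gen_targets, aux_logits, aux_targets):
        # Main task: cross-entropy over generated SQL tokens.
        # gen_logits: (batch, seq_len, vocab); gen_targets: (batch, seq_len)
        l_gen = self.gen_loss(gen_logits.flatten(0, 1), gen_targets.flatten())
        # Auxiliary task, e.g. predicting which schema column each question
        # token refers to (one plausible choice of auxiliary objective).
        # aux_logits: (batch, q_len, n_columns); aux_targets: (batch, q_len)
        l_aux = self.aux_loss(aux_logits.flatten(0, 1), aux_targets.flatten())
        # The auxiliary term acts as a regularizer on the shared encoder.
        return l_gen + self.lambda_aux * l_aux
```

In training, the combined loss would simply replace the plain generation loss, e.g. `loss = MultiTaskLoss(0.5)(gen_logits, gen_targets, aux_logits, aux_targets); loss.backward()`; the single scalar `lambda_aux` then controls how strongly the auxiliary signal constrains the shared representation.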
