Tokenization Repair in the Presence of Spelling Errors

10/15/2020
by Hannah Bast, et al.

We consider the following tokenization repair problem: given a natural language text with any combination of missing or spurious spaces, correct these. Spelling errors can be present, but it is not part of the problem to correct them. For example, given "Tispa per isabout token izaionrep air", compute "Tis paper is about tokenizaion repair". It is tempting to think of this problem as a special case of spelling correction, or to treat the two problems together. We make a case that tokenization repair and spelling correction should and can be treated as separate problems. We investigate a variety of neural models as well as a number of strong baselines. We identify three main ingredients of high-quality tokenization repair: deep language models with a bidirectional component, training the models on text with spelling errors, and making use of the space information already present. Our best methods can repair all tokenization errors on 97.5% of the correctly spelled test sentences and on 96.0% of the misspelled test sentences. When all spaces are removed from the given text (the scenario from previous work), the accuracy falls to 94.5%.
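To make the task concrete, here is a toy sketch of tokenization repair. This is not the paper's neural approach; it is a hypothetical dictionary-based dynamic-programming baseline over a tiny made-up vocabulary, shown only to illustrate that the task fixes spaces while deliberately leaving spelling errors (like "Tis" and "tokenizaion") untouched.

```python
# Toy illustration of tokenization repair: given text with missing or
# spurious spaces, recover plausible token boundaries.
# NOT the paper's method -- a simple DP baseline over a tiny vocabulary.

# Hypothetical vocabulary; "tis" and "tokenizaion" are misspelled on
# purpose, since the task does not correct spelling, only spaces.
VOCAB = {"tis", "paper", "is", "about", "tokenizaion", "repair"}

def repair(text: str) -> str:
    """Strip all spaces, then re-segment into vocabulary words via DP."""
    s = text.replace(" ", "").lower()
    n = len(s)
    best = [None] * (n + 1)   # best[i] = token list covering s[:i]
    best[0] = []
    for i in range(1, n + 1):
        # Try candidate last tokens s[j:i], longest first (small j first).
        for j in range(max(0, i - 20), i):
            if best[j] is not None and s[j:i] in VOCAB:
                best[i] = best[j] + [s[j:i]]
                break
    return " ".join(best[n]) if best[n] is not None else text

print(repair("Tispa per isabout token izaionrep air"))
# -> "tis paper is about tokenizaion repair"
```

Note that this toy version lowercases the input and discards the existing space positions entirely, whereas the paper stresses that the space information already present is a valuable signal for high-quality repair.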

