Multilingual AMR Parsing with Noisy Knowledge Distillation

by Deng Cai, et al.

We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher. We constrain our exploration to a strict multilingual setting: a single model parses all languages, including English. We identify that noisy input and precise output are the keys to successful distillation. Together with extensive pre-training, we obtain an AMR parser whose performance surpasses all previously published results on four foreign languages (German, Spanish, Italian, and Chinese) by large margins: up to 18.8 Smatch points on Chinese, and 11.3 Smatch points on average. On English, our parser also achieves performance comparable to the latest state-of-the-art English-only parser.
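The teacher-student setup the abstract describes can be sketched as sequence-level knowledge distillation: pair each foreign-language sentence (the noisy side, e.g. obtained via machine translation) with the AMR graph the English teacher produces (the precise side), then train the multilingual student on those pairs. This is a minimal illustration, not the authors' code; `translate_to_english`, `english_parser`, and `build_kd_data` are hypothetical placeholders.

```python
def build_kd_data(foreign_sentences, translate_to_english, english_parser):
    """Construct sequence-level distillation data.

    For each foreign sentence, obtain a (possibly noisy) English
    translation, let the English teacher parse it into an AMR, and
    pair the ORIGINAL foreign sentence with the teacher's AMR.
    The multilingual student is then trained on these pairs.
    """
    kd_pairs = []
    for sentence in foreign_sentences:
        noisy_english = translate_to_english(sentence)  # noisy input side
        teacher_amr = english_parser(noisy_english)     # precise output side
        kd_pairs.append((sentence, teacher_amr))
    return kd_pairs


# Toy stand-ins for a real MT system and English AMR parser:
translate_to_english = lambda s: {"der Junge rennt": "the boy runs"}.get(s, s)
english_parser = lambda s: "(r / run-02 :ARG0 (b / boy))" if "boy runs" in s else "(a / amr-empty)"

pairs = build_kd_data(["der Junge rennt"], translate_to_english, english_parser)
```

The key property is that the student never sees the intermediate English text at training time: it learns to map the foreign sentence directly to the teacher's AMR.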


