Embarrassingly Easy Document-Level MT Metrics: How to Convert Any Pretrained Metric Into a Document-Level Metric

09/27/2022
by   Giorgos Vernikos, et al.

We hypothesize that existing sentence-level machine translation (MT) metrics become less effective when the human reference contains ambiguities. To verify this hypothesis, we present a very simple method for extending pretrained metrics to incorporate context at the document level. We apply our method to three popular metrics, BERTScore, Prism, and COMET, and to the reference-free metric COMET-QE. We evaluate the extended metrics on the WMT 2021 metrics shared task using the provided MQM annotations. Our results show that the extended metrics outperform their sentence-level counterparts in about 85% of the tested conditions, when excluding results on low-quality human references. Additionally, we show that our document-level extension of COMET-QE dramatically improves its accuracy on discourse phenomena tasks, outperforming a dedicated baseline by up to 6.1%. These results support our initial hypothesis and show that a simple extension of the metrics permits them to take advantage of context to resolve ambiguities in the reference.
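For intuition, here is a minimal sketch of the kind of document-level extension the abstract describes, assuming the extension simply prepends the preceding sentences of the document to each hypothesis and reference before handing them to an off-the-shelf sentence-level metric (BERTScore via the bert-score Python package). The add_context helper, the window size, and the toy sentences are illustrative assumptions, not taken from the paper's code.

    from bert_score import score

    def add_context(sentences, window=2, sep=" "):
        # Prefix each sentence with up to `window` preceding sentences
        # from the same document (window size is an illustrative choice).
        extended = []
        for i, sent in enumerate(sentences):
            context = sentences[max(0, i - window):i]
            extended.append(sep.join(context + [sent]))
        return extended

    # Toy document: two consecutive sentences; the pronoun in the second
    # hypothesis is only resolvable with the preceding context.
    hyps = ["He left the keys on the table.", "They were still there."]
    refs = ["He left the keys on the table.", "The keys were still there."]

    # Sentence-level scoring (no context).
    _, _, f_sent = score(hyps, refs, lang="en")

    # Document-level scoring: score the context-extended segments instead.
    _, _, f_doc = score(add_context(hyps), add_context(refs), lang="en")

    print("sentence-level F1:", f_sent.tolist())
    print("document-level F1:", f_doc.tolist())

The same wrapping idea carries over to Prism, COMET, or COMET-QE by extending their inputs (and, where applicable, the source segments) in the same way before scoring.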

