MT-Ranker: Reference-free machine translation evaluation by inter-system ranking
CoRR (2024)
Abstract
Traditionally, Machine Translation (MT) evaluation has been treated as a
regression problem: producing an absolute translation-quality score. This
approach has two limitations: i) the scores lack interpretability, and human
annotators struggle to give consistent scores; ii) most scoring methods are
based on (reference, translation) pairs, limiting their applicability in
real-world scenarios where references are absent. In practice, we often care
about whether a new MT system is better or worse than some competitors. In
addition, reference-free MT evaluation is increasingly practical and necessary.
Unfortunately, these two practical considerations have yet to be jointly
explored. In this work, we formulate reference-free MT evaluation as a
pairwise ranking problem. Given the source sentence and a pair of translations,
our system predicts which translation is better. In addition to proposing this
new formulation, we further show that this new paradigm achieves superior
correlation with human judgments using only indirect supervision
from natural language inference and weak supervision from our synthetic data.
In the context of reference-free evaluation, MT-Ranker, trained without any
human annotations, achieves state-of-the-art results on the WMT Shared Metrics
Task benchmarks DARR20, MQM20, and MQM21. On a more challenging benchmark,
ACES, which contains fine-grained evaluation criteria such as addition,
omission, and mistranslation errors, MT-Ranker sets a new state of the art
against both reference-free and reference-based baselines.
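The pairwise formulation described above can be sketched as follows. Note that `pairwise_preference` here is a hypothetical stand-in using a toy word-overlap heuristic, not MT-Ranker's learned model; the sketch only illustrates the ranking interface and how system-level rankings follow from pairwise wins.

```python
from itertools import combinations

def pairwise_preference(source, trans_a, trans_b):
    """Return 0 if trans_a is judged better, 1 if trans_b is.

    Toy heuristic (illustration only, NOT the paper's model): prefer
    the translation sharing more words with the source sentence.
    """
    src_words = set(source.lower().split())
    overlap_a = len(src_words & set(trans_a.lower().split()))
    overlap_b = len(src_words & set(trans_b.lower().split()))
    return 0 if overlap_a >= overlap_b else 1

def rank_systems(sources, system_outputs):
    """Order MT systems by total pairwise wins over all sources.

    system_outputs maps a system name to a list with one translation
    per source sentence.
    """
    wins = {name: 0 for name in system_outputs}
    for name_a, name_b in combinations(system_outputs, 2):
        for i, src in enumerate(sources):
            better = pairwise_preference(
                src, system_outputs[name_a][i], system_outputs[name_b][i]
            )
            wins[name_a if better == 0 else name_b] += 1
    return sorted(wins, key=wins.get, reverse=True)
```

Replacing the heuristic with a learned comparator yields the reference-free setting the paper targets: no human reference translation is consulted, only the source and the competing outputs.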
Keywords
Machine Translation Evaluation