Multilingual \textit{k}-Nearest-Neighbor Machine Translation

EMNLP 2023

Abstract
\textit{k}-nearest-neighbor machine translation has demonstrated remarkable improvements in machine translation quality by creating a datastore of cached examples. However, these gains have been limited to high-resource language pairs with large datastores; low-resource languages remain a challenge. In this paper, we address this issue by combining representations from multiple languages into a single datastore. Our results consistently demonstrate substantial improvements not only in low-resource translation quality (up to $+3.6$ BLEU), but also in high-resource translation quality (up to $+0.5$ BLEU). Our experiments show that it is possible to create multilingual datastores that are a quarter of the size, achieving a 5.3x speed improvement, by using linguistic similarities for datastore creation.\footnote{We will release our code upon acceptance.}
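To make the underlying mechanism concrete, the following is a minimal sketch of the standard \textit{k}NN-MT retrieval step the abstract builds on: a datastore maps cached decoder hidden states (keys) to target tokens (values), and at each decoding step the retrieved-neighbor distribution is interpolated with the base model's distribution. All function names, the brute-force search, and the interpolation weight $\lambda$ are illustrative assumptions, not the paper's implementation (which would typically use an approximate-search index such as FAISS).

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=3, temperature=1.0):
    """Retrieve the k nearest datastore entries to `query` and turn them
    into a distribution over the vocabulary (illustrative sketch)."""
    # L2 distance from the query hidden state to every cached key
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances: closer neighbors get more mass
    logits = -dists[nn] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Scatter neighbor weights onto their target tokens
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, values[nn]):
        p_knn[tok] += w
    return p_knn

def interpolate(p_model, p_knn, lam=0.5):
    """Mix the base MT model's distribution with the retrieved one."""
    return (1 - lam) * p_model + lam * p_knn
```

The multilingual variant described in the abstract would, under this sketch, simply populate `keys`/`values` with entries drawn from several source languages rather than one.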