Integrating Word Embeddings into IBM Word Alignment Models

2018 10th International Conference on Knowledge and Systems Engineering (KSE)

Abstract
Word alignment models are used to generate the word-aligned parallel text consumed by statistical machine translation systems. The most popular word alignment models today are the IBM models, which have been applied in a large number of translation systems. The parameters of the IBM models are estimated by the Maximum Likelihood principle, i.e. by counting co-occurrences of words in the parallel text. This form of parameter estimation leads to an "ambiguity" problem: some words appear together in many sentence pairs even though none of them is a translation of the others. In addition, this method requires a large amount of training data to achieve good results, yet the parallel text used to train the IBM models is usually limited for low-resource languages. In this work, we address these two problems by adding semantic information to the models. Our semantic information is derived from word embeddings, which require only monolingual data to train. We evaluate on English-Vietnamese, a language pair with large differences in grammatical structure. Even on this challenging task, our proposed models achieve significant improvements in word alignment and help improve translation quality.
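The co-occurrence-based Maximum Likelihood estimation the abstract refers to can be illustrated with IBM Model 1, the simplest of the IBM models, whose lexical translation table t(f|e) is trained by EM over expected alignment counts. The sketch below is illustrative only, not the authors' implementation, and the toy corpus is a made-up example; it also shows why the "ambiguity" problem arises, since the expected counts reward any pair of words that frequently co-occur.

```python
from collections import defaultdict

def train_ibm_model1(corpus, iterations=10):
    """Estimate IBM Model 1 lexical probabilities t(f|e) by EM.

    corpus: list of (source_tokens, target_tokens) sentence pairs.
    A minimal sketch of co-occurrence-based MLE, not the paper's code.
    """
    # Uniform initialisation over the source vocabulary.
    f_vocab = {f for fs, _ in corpus for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for fs, es in corpus:
            for f in fs:
                # E-step: distribute one unit of alignment mass for f
                # over all target words e, proportionally to t(f|e).
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    delta = t[(f, e)] / z
                    count[(f, e)] += delta
                    total[e] += delta
        # M-step: renormalise the expected counts.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

# Toy German-English corpus (purely illustrative).
corpus = [("das haus".split(), "the house".split()),
          ("das buch".split(), "the book".split()),
          ("ein buch".split(), "a book".split())]
t = train_ibm_model1(corpus)
```

Because every co-occurrence contributes expected counts, words that merely appear together often (e.g. frequent function words) can end up with inflated probabilities; the paper's proposal is to bias such estimates with embedding-derived semantic similarity, which needs only monolingual data.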
Keywords
IBM models,word embeddings,word alignment,Vietnamese,bilingual mapping