A Neural Grammatical Error Correction System Built on Better Pre-training and Sequential Transfer Learning

Workshop on Innovative Use of NLP for Building Educational Applications (2019)

Abstract
Grammatical error correction can be viewed as a low-resource sequence-to-sequence task, because publicly available parallel corpora are limited. To tackle this challenge, we first generate erroneous versions of large unannotated corpora using a realistic noising function. The resulting parallel corpora are subsequently used to pre-train Transformer models. Then, by sequentially applying transfer learning, we adapt these models to the domain and style of the test set. Combined with a context-aware neural spellchecker, our system achieves competitive results in both the Restricted and Low Resource tracks of the ACL 2019 BEA Shared Task. We release all of our code and materials for reproducibility.
Keywords
learning, correction, pre-training