RG PA at SemEval-2021 Task 1: A Contextual Attention-based Model with RoBERTa for Lexical Complexity Prediction

SemEval@ACL/IJCNLP (2021)

Abstract
In this paper, we propose a contextual attention-based model with two-stage fine-tuning using RoBERTa. First, we perform first-stage fine-tuning of RoBERTa on the corpus so that the model can learn prior domain knowledge. We then obtain contextual embeddings of the context words from the token-level embeddings of the fine-tuned model. We use K-fold cross-validation to train K models and ensemble them to produce the final result. Our system attains second place in the final evaluation phase of sub-task 2 with a Pearson correlation of 0.8575.
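The pipeline described in the abstract (a RoBERTa encoder, attention pooling over token-level embeddings, and a regression head producing a complexity score) can be illustrated with a minimal sketch, assuming PyTorch and Hugging Face transformers. The class and variable names below (AttentionPooler, ComplexityRegressor, etc.) are illustrative assumptions, not the authors' released code, and the first-stage domain fine-tuning step is omitted.

```python
# Minimal sketch of a contextual-attention regressor over RoBERTa token
# embeddings. Illustrative only; not the authors' implementation.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

class AttentionPooler(nn.Module):
    """Learn a scalar attention score per token, then pool the sequence."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, token_embeds, attention_mask):
        # token_embeds: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        scores = self.scorer(token_embeds).squeeze(-1)          # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # mask out padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (batch, seq_len, 1)
        return (weights * token_embeds).sum(dim=1)              # (batch, hidden)

class ComplexityRegressor(nn.Module):
    """RoBERTa encoder + attention pooling + linear regression head."""
    def __init__(self, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.pooler = AttentionPooler(hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state                                     # token-level embeddings
        pooled = self.pooler(hidden_states, attention_mask)
        return self.head(pooled).squeeze(-1)                    # one score per example

# Example usage: score one sentence.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = ComplexityRegressor()
batch = tokenizer(["The hardest word here is ubiquitous."],
                  return_tensors="pt", padding=True)
score = model(batch["input_ids"], batch["attention_mask"])
```

In the K-fold ensembling step, one such model would be trained per fold and the K models' predictions averaged to produce the final score.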
Keywords
Sequence-to-Sequence Learning, Syntax-based Translation Models, Language Modeling, Topic Modeling