Research Paper Classification and Recommendation System based-on Fine-Tuning BERT.
IRI (2023)
Abstract
In this paper, we compare the performance of two popular NLP models, a pre-trained and fine-tuned BERT model and a combined CNN-BiLSTM model, on the tasks of classifying and recommending research papers. We evaluate both models on a research journal benchmark dataset. The results show that the pre-trained, fine-tuned BERT model outperforms the combined CNN-BiLSTM model in classification performance.
Keywords
NLP, CNN, BiLSTM, BERT, Fine-tuning Model
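The abstract describes fine-tuning a pre-trained BERT model for paper classification. As a minimal sketch of that setup, the snippet below builds a BERT sequence-classification model with HuggingFace Transformers and runs one training step. To stay self-contained it uses a tiny, randomly initialized `BertConfig` and random token IDs; in practice one would load pretrained weights (e.g. `BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=...)`) and tokenized paper abstracts. The label count of 5 is an illustrative assumption, not from the paper.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny config so the sketch runs quickly without downloading weights;
# real fine-tuning would start from pretrained "bert-base-uncased".
config = BertConfig(
    hidden_size=128,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=256,
    num_labels=5,  # assumed number of paper categories (illustrative)
)
model = BertForSequenceClassification(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Stand-in batch: random token IDs and labels in place of tokenized abstracts.
input_ids = torch.randint(0, config.vocab_size, (4, 32))
labels = torch.randint(0, config.num_labels, (4,))

# One fine-tuning step: forward pass returns the classification loss
# when labels are supplied; backprop updates all BERT weights.
out = model(input_ids=input_ids, labels=labels)
out.loss.backward()
optimizer.step()
optimizer.zero_grad()

print(out.logits.shape)  # one logit vector per paper, per class
```

Recommendation on top of such a classifier is typically done by ranking papers within the predicted category or by comparing the model's hidden representations, though the abstract does not specify the mechanism.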