Pre-trained Language Models in Multi-Goal Conversational Recommender Systems

Smart Media Journal (스마트미디어저널), 2023

Abstract
In this paper, we examine pre-trained language models used in Multi-Goal Conversational Recommender Systems (MG-CRS), comparing and analyzing the performance of various pre-trained language models. Specifically, we investigate the impact of language model size on MG-CRS performance. We evaluate three types of language models, BERT, GPT-2, and BART, and compare their accuracy on two tasks, type prediction and topic prediction, using the MG-CRS dataset DuRecDial 2.0. Experimental results show that all models achieve excellent performance on the type prediction task, but exhibit significant differences in performance, depending on the model and its size, on the topic prediction task. Based on these findings, the study provides directions for improving the performance of MG-CRS.
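As a concrete illustration of the experimental setup described in the abstract, the sketch below frames one of the two subtasks, goal-type prediction, as sequence classification over the dialogue history with a pre-trained encoder. This is a minimal sketch assuming a Hugging Face Transformers workflow; the checkpoint name, goal-type inventory, and example utterance are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code): goal-type prediction as
# sequence classification with a pre-trained BERT encoder.
# GOAL_TYPES and the checkpoint are hypothetical; DuRecDial 2.0
# defines its own goal-type inventory.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

GOAL_TYPES = ["QA", "chitchat", "movie recommendation", "music recommendation"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(GOAL_TYPES)
)

# The dialogue history is concatenated into a single input sequence.
history = "User: I loved the last sci-fi film you mentioned. Any similar ones?"
inputs = tokenizer(history, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits
predicted = GOAL_TYPES[logits.argmax(dim=-1).item()]
print(predicted)  # arbitrary until the classification head is fine-tuned
```

Topic prediction can be framed the same way, but over a far larger topic label set, which is consistent with the abstract's finding that model type and size matter much more on that task.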
Keywords
language models, pre-trained, multi-goal