Identification and Generation of Actions Using Pre-trained Language Models

WISE (2023)

Abstract
Email is central to day-to-day communication, and its volume in the workplace is growing rapidly. Emails contain many tasks or actions that need to be tracked, but workers find this tracking difficult, so an automatic way to identify these actions would improve workplace productivity. In this paper, we focus on two problems: 1) identifying actions, such as requests and commitments, in emails, and 2) generating actionable text from the context of those emails. Recently, pre-trained language models trained on large unlabelled corpora have achieved state-of-the-art results on NLP tasks. In the present study, a combination of two pre-trained models, Bidirectional Encoder Representations from Transformers (BERT) and Text-to-Text Transfer Transformer (T5), is used to identify and generate actions from emails. The first step of our method extracts actions from emails using BERT sequence classification; the second step generates meaningful actionable text using T5 summarization. The Enron People Assignment (EPA) dataset is used to evaluate these methods on both large and small datasets, and the BERT sequence classification model is compared against other language models and machine learning models. The results show that the BERT model outperforms the other machine learning models on the action identification task, and the text generated by the summarization model is a significant improvement over the original action sentence. The contribution of this paper is thus a state-of-the-art approach to identifying actions and generating actionable text by leveraging pre-trained models.
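
The two-step pipeline described in the abstract maps naturally onto off-the-shelf libraries. The following is a minimal sketch using the Hugging Face transformers library; the checkpoint names (bert-base-uncased, t5-small), the binary label mapping, and the "summarize:" prompt prefix are illustrative assumptions, since the paper fine-tunes both models on the EPA dataset and the abstract does not give the exact configuration.

# Minimal sketch of the two-step pipeline, assuming Hugging Face transformers.
# Checkpoints, label mapping, and prompt format are assumptions; the paper's
# models are fine-tuned on the Enron People Assignment (EPA) dataset.
import torch
from transformers import (
    BertTokenizer, BertForSequenceClassification,
    T5Tokenizer, T5ForConditionalGeneration,
)

# Step 1: BERT sequence classification flags action sentences
# (requests, commitments). Label mapping assumed: 0 = no action, 1 = action.
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # head is untrained here; the paper fine-tunes it
)

def is_action(sentence: str) -> bool:
    """Return True if the (assumed fine-tuned) classifier flags an action."""
    inputs = bert_tok(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = bert(**inputs).logits
    return logits.argmax(dim=-1).item() == 1

# Step 2: T5 summarization turns the email context into short actionable text.
t5_tok = T5Tokenizer.from_pretrained("t5-small")
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate_action_text(context: str) -> str:
    """Summarize the email context into actionable text ("summarize:" prompt assumed)."""
    inputs = t5_tok("summarize: " + context, return_tensors="pt",
                    truncation=True, max_length=512)
    ids = t5.generate(inputs.input_ids, max_length=48, num_beams=4)
    return t5_tok.decode(ids[0], skip_special_tokens=True)

# Hypothetical email sentence for illustration only.
email = ("Hi Ann, could you please send the revised Q3 forecast to the "
         "trading desk before Friday's review meeting?")
if is_action(email):
    print(generate_action_text(email))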
Keywords
language models, actions, pre-trained