Extending Transformer to Predict Both the Order and Occurrence Times of Elements in a Sequence

Hyewon Ryu, Sara Yu, Ki Yong Lee

IEEE International Conference on Big Data and Smart Computing (2024)

Abstract
Recently, sequence prediction techniques using Transformers have become essential in various fields. However, existing Transformer-based approaches focus only on predicting the next elements in a sequence and do not predict when those elements occur. Therefore, in this paper, we propose an extension of the Transformer that predicts not only the next elements but also their occurrence times. For this purpose, we extend the Transformer in three ways: (1) we propose a new positional encoding method that reflects both the order and the occurrence time of each element in a sequence; (2) we extend the output layer of the Transformer to simultaneously predict the next element and its occurrence time; and (3) we refine the loss function to measure the difference between sequences considering both the order and the occurrence times of elements. Through experiments on real datasets, we confirm that the proposed model predicts the order and occurrence time of each element more accurately than the existing Transformer.
Keywords
Transformer, Sequence prediction, Positional encoding, Timestamped sequences
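This page carries only the abstract, so the implementation details of the three extensions are not available here. The sketch below is a minimal, hypothetical PyTorch illustration of how they could look: a sinusoidal time encoding summed with the standard sinusoidal positional encoding, a linear regression head for occurrence times alongside the usual next-element classifier, and a weighted sum of cross-entropy and MSE as the combined loss. All names and design choices (TimeAwarePositionalEncoding, DualOutputHead, the weight alpha, the additive combination) are assumptions for illustration, not the paper's actual formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwarePositionalEncoding(nn.Module):
    """Hypothetical encoding that combines the standard sinusoidal
    positional encoding (element order) with a sinusoidal encoding of
    each element's timestamp (occurrence time). Summing the two
    components is an assumption; the paper's formulation may differ."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        freqs = torch.exp(torch.arange(0, d_model, 2).float()
                          * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len).float().unsqueeze(1)
        pe[:, 0::2] = torch.sin(pos * freqs)
        pe[:, 1::2] = torch.cos(pos * freqs)
        self.register_buffer("pe", pe)        # order component
        self.register_buffer("freqs", freqs)  # reused for the time component

    def forward(self, x: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); times: (batch, seq_len) timestamps
        te = torch.zeros_like(x)
        te[..., 0::2] = torch.sin(times.unsqueeze(-1) * self.freqs)
        te[..., 1::2] = torch.cos(times.unsqueeze(-1) * self.freqs)
        return x + self.pe[: x.size(1)] + te

class DualOutputHead(nn.Module):
    """Extended output layer: classification logits for the next element
    plus a scalar regression for its occurrence time."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.element_head = nn.Linear(d_model, vocab_size)
        self.time_head = nn.Linear(d_model, 1)

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, d_model) decoder hidden states
        return self.element_head(h), self.time_head(h).squeeze(-1)

def order_time_loss(logits, pred_times, target_ids, target_times, alpha=0.5):
    """Hypothetical refined loss: cross-entropy over next-element
    predictions plus an alpha-weighted MSE over the predicted
    occurrence times. The weighting scheme is an assumption."""
    ce = F.cross_entropy(logits.transpose(1, 2), target_ids)
    mse = F.mse_loss(pred_times, target_times)
    return ce + alpha * mse
```

In use, the time-aware encoding would replace the standard positional encoding at the input of an otherwise unchanged Transformer stack, and the dual head would replace the final projection layer, so both targets are trained jointly through the combined loss.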