Document-Level Relation Extraction with Structure Enhanced Transformer Encoder

2022 International Joint Conference on Neural Networks (IJCNN)

Abstract
Document-level relation extraction aims to discover relational facts among entity pairs in a document and has attracted increasing attention in recent years. Most existing methods fall into two categories: graph-based and transformer-based. However, previous transformer-based methods neglect the structural information between entities, while graph-based methods cannot extract structural information effectively because they separate the encoding stage from the structure reasoning stage. In this paper, we propose an effective structure enhanced transformer encoder model (SETE) that integrates entity structural information into the transformer encoder. We first define a mention-level graph based on mention dependencies and convert it to a token-level graph. We then design a dual self-attention mechanism that enriches the structural and contextual information between entities to increase the inferential capability of the vanilla transformer encoder. Experiments on three public datasets show that the proposed SETE outperforms previous state-of-the-art methods, and further analyses illustrate the interpretability of our model.
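The abstract does not give the exact formulation of the dual self-attention mechanism, so the following is only a minimal sketch of one plausible reading: one attention branch attends over the full context (vanilla self-attention), the other is masked by the token-level graph so tokens attend only to their graph neighbours, and the two branches are fused. The class name, the gated fusion, and the mask construction are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a "dual" self-attention layer (PyTorch).
# Assumptions: the structural branch is realized as graph-masked attention,
# and the two branches are combined with a learned sigmoid gate.
import torch
import torch.nn as nn


class DualSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.context_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.structure_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor, token_graph: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # token_graph: (batch, seq_len, seq_len) boolean adjacency derived
        # from the mention-level graph converted to the token level.
        ctx, _ = self.context_attn(x, x, x)  # contextual branch: full attention

        # Add self-loops so every token can attend to at least itself,
        # then invert: in attn_mask, True marks positions NOT allowed to attend.
        eye = torch.eye(x.size(1), dtype=torch.bool, device=x.device)
        adj = token_graph.bool() | eye
        struct_mask = (~adj).repeat_interleave(self.structure_attn.num_heads, dim=0)
        struct, _ = self.structure_attn(x, x, x, attn_mask=struct_mask)  # structural branch

        # Gated fusion of the two branches (an assumption of this sketch).
        g = torch.sigmoid(self.gate(torch.cat([ctx, struct], dim=-1)))
        return g * ctx + (1.0 - g) * struct


# Example usage with random inputs.
layer = DualSelfAttention(d_model=768, n_heads=12)
tokens = torch.randn(2, 128, 768)
adjacency = torch.rand(2, 128, 128) > 0.9
out = layer(tokens, adjacency)  # (2, 128, 768)
```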
Keywords
Relation Extraction, Transformer, Natural Language Inference, Graph