A novel joint extraction model based on cross-attention mechanism and global pointer using context shield window

Computer Speech & Language (2024)

Abstract
Relational triple extraction is a critical step in knowledge graph construction. Compared with pipeline-based extraction, joint extraction has attracted increasing attention because it makes better use of entity and relation information without introducing error propagation. The main challenge in joint extraction, however, lies in handling overlapping triples. Existing approaches rely on sequential steps or multiple modules, which tend to accumulate errors and suffer interference from redundant data. In this study, we propose a novel joint extraction model that combines a cross-attention mechanism with a global pointer equipped with a context shield window. Specifically, our method first feeds the input text into a pre-trained RoBERTa model to obtain word vector representations. These embeddings are then passed through a modified cross-attention layer together with entity-type embeddings to compensate for missing entity-type information. Next, the global pointer reformulates the task as quintuple extraction, which neatly resolves the problem of overlapping triples. Notably, we design a context shield window on the global pointer, which restricts entity extraction to a limited range and thereby helps identify correct entities. Finally, adversarial training is added during training to improve the model's robustness against malicious samples. Our approach outperforms mainstream models and achieves impressive results on three publicly available datasets.
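
The abstract outlines a pipeline of RoBERTa encoding, cross-attention over entity-type embeddings, and global-pointer scoring restricted by a context shield window. The PyTorch sketch below illustrates one plausible reading of that pipeline; the module structure, the entity-type embedding table, the window width, and all hyperparameters are assumptions for illustration and do not reproduce the authors' code (adversarial training and the quintuple decoding step are omitted).

```python
# Hypothetical sketch of the described architecture; names and sizes are assumed.
import torch
import torch.nn as nn
from transformers import RobertaModel


class CrossAttentionGlobalPointer(nn.Module):
    def __init__(self, num_relations, num_entity_types=10,
                 hidden_size=768, head_size=64, window=20):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        # Learned entity-type embeddings that token states attend over.
        self.type_embeddings = nn.Parameter(torch.randn(num_entity_types, hidden_size))
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)
        # Global-pointer style scoring: one (start, end) score map per relation.
        self.q_proj = nn.Linear(hidden_size, num_relations * head_size)
        self.k_proj = nn.Linear(hidden_size, num_relations * head_size)
        self.num_relations = num_relations
        self.head_size = head_size
        self.window = window  # context shield window width (assumed value)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        # Cross-attention: token states query the entity-type embeddings
        # to inject entity-type information into the representations.
        types = self.type_embeddings.unsqueeze(0).expand(h.size(0), -1, -1)
        fused, _ = self.cross_attn(h, types, types)
        h = h + fused

        batch, seq_len, _ = h.shape
        q = self.q_proj(h).view(batch, seq_len, self.num_relations, self.head_size)
        k = self.k_proj(h).view(batch, seq_len, self.num_relations, self.head_size)
        # scores[b, r, i, j]: plausibility of the (i, j) position pair under relation r.
        scores = torch.einsum("bmrd,bnrd->brmn", q, k) / self.head_size ** 0.5

        # Context shield window: only positions within `window` tokens of each
        # other may be paired, masking out distant, noisy candidates.
        idx = torch.arange(seq_len, device=h.device)
        dist = (idx[None, :] - idx[:, None]).abs()
        shield = dist <= self.window
        scores = scores.masked_fill(~shield[None, None], float("-inf"))
        return scores
```

Under this reading, the window mask plays the role described in the abstract: it confines candidate entity spans to a limited neighborhood so the pointer scores are not dominated by redundant long-range pairs.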
Keywords
Joint extraction, Global pointer, Cross-attention mechanism, Overlapping relation extraction