Best from Top k Versus Top 1: Improving Distant Supervision Relation Extraction with Deep Reinforcement Learning.

Pacific-Asia Conference on Knowledge Discovery and Data Mining (2019)

Abstract
Distant supervision relation extraction is a promising approach to finding new relation instances in large text corpora. Most previous works employ the top 1 strategy, i.e., predicting the relation of a sentence with the highest confidence score, which is not always the optimal solution. To improve distant supervision relation extraction, this work applies the best from top k strategy to explore the possibility of relations with lower confidence scores. We approach the best from top k strategy using a deep reinforcement learning framework, where the model learns to select the optimal relation among the top k candidates for better predictions. Specifically, we employ a deep Q-network, trained to optimize a reward function that reflects the extraction performance under distant supervision. Experiments on three public datasets (news articles, Wikipedia, and biomedical papers) demonstrate that the proposed strategy significantly improves the performance of traditional state-of-the-art relation extractors. We achieve an improvement of 5.13% in average F1-score over four competitive baselines.
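The sketch below illustrates the best from top k idea described in the abstract: a small Q-network scores the top k relation candidates returned by a base extractor and selects one of them, instead of always taking the top 1. It is not the authors' implementation; the network architecture, state encoding (sentence representation concatenated with candidate confidences), dimensions, and names are all illustrative assumptions.

```python
# Illustrative sketch only; all sizes, names, and the state encoding are assumptions.
import torch
import torch.nn as nn


class TopKSelector(nn.Module):
    """Q-network that scores the top-k relation candidates from a base
    relation extractor and picks one of them (not necessarily the top 1)."""

    def __init__(self, state_dim: int, k: int, hidden_dim: int = 128):
        super().__init__()
        # Assumed state: sentence representation concatenated with the
        # k candidate confidence scores from the base extractor.
        self.q_net = nn.Sequential(
            nn.Linear(state_dim + k, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, k),  # one Q-value per candidate relation
        )

    def forward(self, sentence_repr: torch.Tensor, topk_scores: torch.Tensor) -> torch.Tensor:
        state = torch.cat([sentence_repr, topk_scores], dim=-1)
        return self.q_net(state)


# Usage: re-rank the top-3 candidates for a batch of 2 sentences.
selector = TopKSelector(state_dim=256, k=3)
sentence_repr = torch.randn(2, 256)   # assumed sentence encodings
topk_scores = torch.rand(2, 3)        # confidences from the base extractor
q_values = selector(sentence_repr, topk_scores)
chosen = q_values.argmax(dim=-1)      # index into the top-k list, may differ from top 1
print(chosen)
```

In a full DQN setup, the selector would be trained with a reward reflecting extraction performance under distant supervision, as the abstract states; the reward definition and training loop are omitted here.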
Keywords
Distant supervision, Relation extraction, Deep reinforcement learning, Deep Q-networks