AutoRE: Document-Level Relation Extraction with Large Language Models
arXiv (2024)
Abstract
Large Language Models (LLMs) have demonstrated exceptional abilities in
comprehending and generating text, motivating numerous researchers to utilize
them for Information Extraction (IE) purposes, including Relation Extraction
(RE). Nonetheless, most existing methods are predominantly designed for
Sentence-level Relation Extraction (SentRE) tasks, which typically encompass a
restricted set of relations and triplet facts within a single sentence.
Furthermore, certain approaches resort to treating relations as candidate
choices integrated into prompt templates, leading to inefficient processing and
suboptimal performance when tackling Document-Level Relation Extraction (DocRE)
tasks, which entail handling multiple relations and triplet facts distributed
across a given document, posing distinct challenges. To overcome these
limitations, we introduce AutoRE, an end-to-end DocRE model that adopts a novel
relation extraction paradigm named RHF (Relation-Head-Facts). Unlike existing
approaches, AutoRE does not rely on the assumption of known relation options,
making it more reflective of real-world scenarios. Additionally, we have
developed an easily extensible RE framework using a Parameter-Efficient
Fine-Tuning (PEFT) algorithm (QLoRA). Our experiments on the RE-DocRED dataset
demonstrate AutoRE's state-of-the-art performance, surpassing TAG by 10.03%.
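The RHF (Relation-Head-Facts) paradigm named above can be pictured as a staged extraction pipeline: the model is first queried for the relations a document expresses, then for the head entities of each relation, and finally for the complete triplet facts. The following minimal Python sketch illustrates that control flow under stated assumptions; the `llm` callable and the prompt wording are illustrative stand-ins, not AutoRE's actual fine-tuned model or prompts.

```python
# Hedged sketch of the RHF (Relation-Head-Facts) paradigm: three staged
# queries -- relations, then head entities, then full triplet facts.
# The `llm` callable is a hypothetical stand-in for the fine-tuned model.

def rhf_extract(document: str, llm) -> list[tuple[str, str, str]]:
    """Run the three RHF stages and return (head, relation, tail) triplets."""
    # Stage 1: which relations does the document express?
    relations = llm(f"List the relations expressed in:\n{document}")

    triplets = []
    for rel in relations:
        # Stage 2: which head entities participate in this relation?
        heads = llm(f"List head entities for relation '{rel}' in:\n{document}")
        for head in heads:
            # Stage 3: complete each (head, relation) pair into full facts.
            tails = llm(
                f"List tail entities such that ('{head}', '{rel}', tail) "
                f"holds in:\n{document}"
            )
            triplets.extend((head, rel, tail) for tail in tails)
    return triplets


# Toy stub standing in for the LLM, keyed on the prompt prefix.
def toy_llm(prompt: str) -> list[str]:
    if prompt.startswith("List the relations"):
        return ["founded_by"]
    if prompt.startswith("List head entities"):
        return ["SpaceX"]
    return ["Elon Musk"]


print(rhf_extract("SpaceX was founded by Elon Musk.", toy_llm))
# → [('SpaceX', 'founded_by', 'Elon Musk')]
```

Because relations are generated rather than supplied as candidate choices in the prompt, no relation list needs to be enumerated up front, which is the property the abstract contrasts against prior prompt-template approaches.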