How to Encode Domain Information in Relation Classification
arXiv (2024)
Abstract
Current language models require large amounts of training data to obtain high
performance. For Relation Classification (RC), many datasets are
domain-specific, so combining datasets to obtain better performance is
non-trivial. We explore a multi-domain training setup for RC and attempt to
improve performance by encoding domain information. Our proposed models improve
by more than 2 Macro-F1 points over the baseline setup, and our analysis
reveals that not all labels benefit equally: classes that occupy a similar
space across domains (i.e., whose interpretation is close across them, for
example "physical") benefit the least, while domain-dependent relations (e.g.,
"part-of") improve the most when domain information is encoded.