TPKE-QA: A gapless few-shot extractive question answering approach via task-aware post-training and knowledge enhancement

Expert Systems with Applications (2024)

Abstract
Few-shot extractive question answering (EQA) is a challenging task in natural language processing, and current methods are mainly based on pretrained language models (PLMs). Data augmentation is often employed to improve the answer predictions of EQA models in few-shot settings. However, due to the mismatch between pretraining objectives and the EQA task, as well as embedding-space alignment bottlenecks, the performance of few-shot EQA models remains limited. We propose TPKE-QA, a few-shot extractive Question Answering approach via Task-aware Post-training and Knowledge Enhancement, which uses entity-noun-oriented span selection in post-training to automatically generate EQA-style examples from a large-scale unlabeled corpus. Post-training on the generated examples effectively closes the gap between PLMs and the EQA task. To avoid embedding-space alignment issues, a knowledge-enhanced sequence generation and knowledge injection approach for the EQA task enables gapless knowledge enhancement and fine-tuning on the post-trained model. In experiments, TPKE-QA achieved state-of-the-art results in most few-shot settings on the MRQA 2019 benchmark.
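The entity-noun-oriented example generation described in the abstract could, in spirit, look like the following minimal Python sketch. All names, the capitalized-word heuristic, and the cloze-style question format are illustrative assumptions, not the paper's actual algorithm:

```python
import re

def generate_eqa_examples(passage):
    """Hypothetical sketch of task-aware post-training data generation:
    select entity-like spans in a passage and turn each into a
    cloze-style EQA example (question = the sentence with the span
    masked, answer = the span itself)."""
    examples = []
    for sentence in re.split(r"(?<=[.!?])\s+", passage):
        # Crude entity/noun heuristic: runs of capitalized words.
        # A real system would use a POS tagger or NER model instead.
        for match in re.finditer(r"\b(?:[A-Z][a-z]+\s?)+\b", sentence):
            if match.start() == 0:
                continue  # skip sentence-initial capitalization
            span = match.group().strip()
            examples.append({
                "context": passage,
                "question": sentence.replace(span, "[MASK]", 1),
                "answer": span,
                "answer_start": passage.find(span),
            })
    return examples
```

Each generated example already has the (context, question, answer-span) shape of extractive QA, so a PLM can be post-trained on them with the standard span-prediction objective before few-shot fine-tuning.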
Keywords
Extractive question answering, Few-shot, Post-training, Knowledge enhancement, Task-aware, Pretrained language model