Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking

arXiv (2022)

Abstract
In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model through pseudo-labeling and employs Purpose Preserving Augmentation (PPAug) to prevent overfitting. Our method improves performance in the 10% few-shot setting by approximately 4% on MultiWOZ 2.1 and increases slot recall on unseen values by 8.34% compared to the baseline.
Keywords
purpose preserving augmentation,self-training,few-shot
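
The abstract describes an iterative self-training loop: train on the few-shot labeled data, pseudo-label unlabeled dialogues, augment the pseudo-labeled data with PPAug, and retrain. The sketch below illustrates this loop under assumptions not stated in the abstract; `train_fn`, `predict_fn`, `augment_fn`, the confidence threshold, and the number of rounds are hypothetical placeholders, not the paper's actual implementation.

```python
from typing import Callable, Dict, List, Tuple

Dialogue = str                 # dialogue context (placeholder type)
State = Dict[str, str]         # slot -> value pairs (placeholder type)


def self_train(
    model,
    labeled: List[Tuple[Dialogue, State]],
    unlabeled: List[Dialogue],
    train_fn: Callable,        # fine-tunes the generative DST model on (dialogue, state) pairs
    predict_fn: Callable,      # returns (predicted state, confidence score) for a dialogue
    augment_fn: Callable,      # purpose-preserving augmentation (PPAug placeholder)
    rounds: int = 3,           # assumed number of self-training iterations
    threshold: float = 0.9,    # assumed confidence cutoff for keeping pseudo labels
):
    """Hedged sketch of a self-training loop with pseudo-labeling and augmentation."""
    for _ in range(rounds):
        # 1. Fine-tune on the current pool (few-shot seed + accepted pseudo labels).
        model = train_fn(model, labeled)

        # 2. Pseudo-label unlabeled dialogues, keeping only confident predictions.
        pseudo = []
        for dialogue in unlabeled:
            state, score = predict_fn(model, dialogue)
            if score >= threshold:
                pseudo.append((dialogue, state))

        # 3. Augment pseudo-labeled dialogues while preserving their slots/values,
        #    so the model does not overfit to its own generated labels.
        augmented = [(augment_fn(d, s), s) for d, s in pseudo]

        # 4. Grow the labeled pool for the next round.
        labeled = labeled + pseudo + augmented

    return model
```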