Accelerating Multiple Intent Detection and Slot Filling via Targeted Knowledge Distillation.

EMNLP 2023 (2023)

Abstract
Recent non-autoregressive Spoken Language Understanding (SLU) models have attracted increasing attention owing to their high inference speed. However, most of them still (1) suffer from the multi-modality problem, since the prior knowledge about the reference available during inference is relatively poor; and (2) fail to achieve satisfactory inference speed, limited by their complex frameworks. To tackle these problems, in this paper we propose a $\textbf{T}$argeted $\textbf{K}$nowledge $\textbf{D}$istillation $\textbf{F}$ramework (TKDF), which applies knowledge distillation to improve performance. Specifically, we first train an SLU model as a teacher model, which has higher accuracy but slower inference. We then introduce an evaluator and apply a curriculum learning strategy to select proper targets for the student model. Experimental results on two public multi-intent SLU datasets demonstrate that our method realizes a flexible trade-off between inference speed and accuracy, achieving performance comparable to state-of-the-art models while speeding up inference by more than 4.5 times.
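The abstract only outlines the distillation recipe (teacher targets, an evaluator, and a curriculum for picking which targets the student learns from). The sketch below is a minimal, hypothetical illustration of that idea and not the paper's actual implementation: the function name `targeted_distillation_loss`, the `difficulty_scores` produced by the evaluator, and the `curriculum_threshold` schedule are all assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def targeted_distillation_loss(student_logits, teacher_logits, gold_labels,
                               difficulty_scores, curriculum_threshold,
                               temperature=2.0):
    """Hypothetical sketch of per-token target selection for distillation.

    difficulty_scores: evaluator output in [0, 1], one score per token.
    Tokens the evaluator deems easy (score <= curriculum_threshold) are
    supervised with the teacher's soft distribution; the rest fall back
    to the gold reference. Raising the threshold over training epochs
    gives a simple curriculum.
    """
    # Soft-label KD term (teacher -> student): per-token KL with temperature.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="none",
    ).sum(-1) * (temperature ** 2)

    # Hard-label cross-entropy against the gold reference, per token.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        gold_labels.view(-1),
        reduction="none",
    ).view_as(kd)

    # Curriculum mask: distil from the teacher only where the evaluator allows it.
    use_teacher = (difficulty_scores <= curriculum_threshold).float()
    return (use_teacher * kd + (1.0 - use_teacher) * ce).mean()
```

In this toy version, a flexible speed/accuracy trade-off comes from training the (fast, non-autoregressive) student with this loss while keeping the slower teacher offline; the schedule for `curriculum_threshold` and the form of the evaluator are design choices the abstract does not specify.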