Rewarding What Matters: Step-by-Step Reinforcement Learning for Task-Oriented Dialogue
arXiv (2024)
Abstract
Reinforcement learning (RL) is a powerful approach to enhance task-oriented
dialogue (TOD) systems. However, existing RL methods tend to mainly focus on
generation tasks, such as dialogue policy learning (DPL) or response generation
(RG), while neglecting dialogue state tracking (DST) for understanding. This
narrow focus prevents the systems from achieving globally optimal performance by
overlooking the interdependence between understanding and generation.
Additionally, RL methods face challenges with sparse and delayed rewards, which
complicate training and optimization. To address these issues, we extend RL
into both understanding and generation tasks by introducing step-by-step
rewards throughout the token generation. The understanding reward increases as
more slots are correctly filled in DST, while the generation reward grows with
the accurate inclusion of user requests. Our approach provides a balanced
optimization aligned with task completion. Experimental results demonstrate
that our approach effectively enhances the performance of TOD systems and
achieves new state-of-the-art results on three widely used datasets, including
MultiWOZ2.0, MultiWOZ2.1, and In-Car. Our approach also shows superior few-shot
ability in low-resource settings compared to current models.
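The two reward signals described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: the slot names, the dictionary-based state representation, and the simple proportional reward shaping are all assumptions made for clarity. The understanding reward grows as more dialogue-state slots are filled correctly, and the generation reward grows as more user-requested values appear in the response.

```python
def understanding_reward(predicted_slots, gold_slots):
    """Fraction of gold dialogue-state slots whose values are predicted
    correctly (a stand-in for the paper's step-by-step DST reward)."""
    if not gold_slots:
        return 0.0
    correct = sum(1 for slot, value in gold_slots.items()
                  if predicted_slots.get(slot) == value)
    return correct / len(gold_slots)


def generation_reward(response_tokens, requested_values):
    """Fraction of user-requested values that actually appear in the
    generated response (a stand-in for the generation-side reward)."""
    if not requested_values:
        return 0.0
    covered = sum(1 for v in requested_values if v in response_tokens)
    return covered / len(requested_values)


# Hypothetical example: two gold slots, one predicted correctly.
gold = {"hotel-area": "north", "hotel-stars": "4"}
pred = {"hotel-area": "north", "hotel-stars": "3"}
print(understanding_reward(pred, gold))  # 0.5

# The user asked for a phone number; the response contains it.
response = ["the", "phone", "number", "is", "01223-356354"]
print(generation_reward(response, ["01223-356354"]))  # 1.0
```

In this sketch the reward rises monotonically as each additional slot or requested value is covered, which mirrors the paper's idea of dense, step-by-step feedback rather than a single sparse reward at dialogue completion.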