Simulating and Modeling the Risk of Conversational Search

ACM Transactions on Information Systems (2022)

Abstract
In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying when clarifying questions are needed and on methods for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. Because existing conversational search models are far from perfect, however, they can and often do retrieve or generate bad clarifying questions. Asking too many clarifying questions can also drain a user's patience when the user values search efficiency over correctness. These risks mean such models can backfire and harm the user's search experience. In this work, we propose a simulation framework for modeling the risk of asking questions in conversational search, and we further revise a risk-aware conversational search model to control that risk. We demonstrate the model's robustness and effectiveness through extensive experiments on three conversational datasets (MSDialog, the Ubuntu Dialog Corpus, and Opendialkg), comparing it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperforms all of the baselines in most of our experiments.
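The core decision the abstract describes, whether the agent should ask a clarifying question or return results immediately, can be sketched as a risk-adjusted comparison. The function name, the linear scoring rule, and the scalar `risk_penalty` below are illustrative assumptions for exposition only, not the paper's actual model (which learns this decision with reinforcement learning):

```python
def choose_action(answer_score: float,
                  question_score: float,
                  risk_penalty: float) -> str:
    """Return 'ask' or 'answer' based on risk-adjusted scores.

    answer_score:   estimated quality of the best retrieved result
    question_score: estimated usefulness of the best clarifying question
    risk_penalty:   cost of asking (user patience, bad-question risk)
    """
    # Asking is only worthwhile if its score, after discounting the
    # risk of annoying the user or asking a bad question, still beats
    # returning results right away.
    if question_score - risk_penalty > answer_score:
        return "ask"
    return "answer"


# A strong candidate answer makes asking unnecessary; a weak one
# justifies a clarifying question despite the risk.
print(choose_action(answer_score=0.8, question_score=0.6, risk_penalty=0.1))  # answer
print(choose_action(answer_score=0.3, question_score=0.7, risk_penalty=0.1))  # ask
```

A learned risk-control policy would replace the fixed penalty with a state-dependent value estimated from the conversation so far.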
Keywords
Conversational search, risk control, reinforcement learning