Empowering Calibrated (Dis-)Trust in Conversational Agents: A User Study on the Persuasive Power of Limitation Disclaimers vs. Authoritative Style.

ACM Conference on Human Factors in Computing Systems (2024)

Abstract
While conversational agents based on Large Language Models (LLMs) can drive progress in many domains, they are prone to generating faulty information. To ensure an efficient, safe, and satisfactory user experience that maximizes the benefits of these systems, users must be empowered to judge the reliability of system outputs. Here, both disclaimers and an agent's communicative style are pivotal design elements. In an online study with 594 participants, we investigated how these affect users' trust and a mock-up agent's persuasiveness, based on an established framework from social psychology. While prior information on potential inaccuracies or faulty information did not affect trust, an authoritative communicative style elicited more trust. Moreover, a trusted agent was more persuasive, resulting in more positive attitudes toward the subject of the conversation. The results imply that disclaimers about agents' limitations fail to effectively alter users' trust, but that trust calibration can be supported by an appropriate communicative style during the interaction.