Deciphering Deception: How Different Rhetoric of AI Language Impacts Users' Sense of Truth in LLMs

Dahey Yoo, Hyunmin Kang, Changhoon Oh

International Journal of Human-Computer Interaction (2024)

Abstract
Users are increasingly exposed to AI-generated language, which presents potential risks of deception and miscommunication. This study examined how the rhetorical aspects of AI-generated language influence users' truth discernment. We conducted a user study comparing three levels of rhetorical presence and four persuasive rhetorical elements, using interviews to understand users' truth-detection methods. Results showed that outputs with fewer rhetorical elements made it difficult for users to distinguish truth from falsehood, while those with more rhetoric often misled users into accepting false statements as true. Users' expectations of AI influenced their truth judgments: responses that met those expectations were perceived as more truthful. Casual, human-like responses were often judged false, while technical, precise AI responses were preferred. This research shows that the rhetorical elements of AI language can significantly bias individuals regardless of a statement's actual truth. To enhance transparency in human-AI communication, AI designs should thoughtfully integrate rhetorical elements and establish guiding principles that minimize the potential for deceptive responses.
Key words
AI-generated language, human-AI communication, deception detection, rhetoric, ALIED theory, language expectancy theory