They Would Never Say Anything Like This! Reasons To Doubt Political Deepfakes

European Journal of Communication (2024)

Abstract
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes are perceived and which potential motivations drive doubt in the believability of deepfakes versus authentic videos. To better understand audience perceptions of deepfakes, we ran an online experiment (N = 829) in which participants were randomly exposed to a politician's authentic speech in textual or audio-visual form, or to a textual or audio-visual manipulation (a deepfake) in which this politician's speech was forged to include a radical right-wing populist narrative. In response to both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such judgments. Key findings show that participants have a hard time distinguishing a deepfake from a related authentic video, and that the distance of the deepfake's content from reality is a more likely cause of doubt than perceived technological glitches. Together, these findings offer new insights into news users' abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy interventions promoting accurate verification skills among the audience.
Keywords
Deepfakes, experiment, deception detection, disinformation, media literacy, misinformation, verification