Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries
CoRR (2024)
Abstract
Deepfake technologies have become ubiquitous, "democratizing" the ability to
manipulate photos and videos. One popular use of deepfake technology is the
creation of sexually explicit content, which can then be posted and shared
widely on the internet. Drawing on a survey of over 16,000 respondents in 10
different countries, this article examines attitudes and behaviors related to
"deepfake pornography" as a specific form of non-consensual synthetic intimate
imagery (NSII). Our study found that deepfake pornography behaviors were
considered harmful by respondents, despite nascent societal awareness.
Regarding the prevalence of deepfake pornography victimization and perpetration, 2.2%
of all respondents indicated personal victimization, and 1.8% of
respondents indicated perpetration behaviors. Respondents from countries with
specific legislation still reported perpetration and victimization experiences,
suggesting NSII laws are inadequate to deter perpetration. Approaches to
prevent and reduce harms may include digital literacy education, as well as
enforced platform policies, practices, and tools which better detect, prevent,
and respond to NSII content.