Exploring Gender Biases in Language Patterns of Human-Conversational Agent Conversations
CoRR (2024)
Abstract
With the rise of human-machine communication, machines are increasingly
designed with humanlike characteristics, such as gender, which can
inadvertently trigger cognitive biases. Many conversational agents (CAs), such
as voice assistants and chatbots, default to female personas, leading to
concerns about perpetuating gender stereotypes and inequality. Critiques have
emerged regarding the potential objectification of females and reinforcement of
gender stereotypes by these technologies. This research, situated in
conversational AI design, aims to delve deeper into the impacts of gender
biases in human-CA interactions. From a behavioral and communication research
standpoint, this program focuses not only on users' perceptions but also on
their linguistic styles when interacting with CAs, an aspect that previous
research has rarely explored. It aims to understand how pre-existing gender biases might be
triggered by CAs' gender designs. It further investigates how CAs' gender
designs may reinforce gender biases and extend them to human-human
communication. The findings aim to inform ethical design of conversational
agents, addressing whether gender assignment in CAs is appropriate and how to
promote gender equality in design.