Quite Good, but Not Enough: Nationality Bias in Large Language Models – A Case Study of ChatGPT
International Conference on Computational Linguistics (2024)
Abstract
While nationality is a pivotal demographic element that enhances the
performance of language models, it has received far less scrutiny regarding
inherent biases. This study investigates nationality bias in ChatGPT (GPT-3.5),
a large language model (LLM) designed for text generation. The research covers
195 countries, 4 temperature settings, and 3 distinct prompt types, generating
4,680 discourses about nationality descriptions in Chinese and English.
Automated metrics were used to analyze the nationality bias, and expert
annotators alongside ChatGPT itself evaluated the perceived bias. The results
show that ChatGPT's generated discourses are predominantly positive, especially
compared to its predecessor, GPT-2. However, when prompted with negative
inclinations, it occasionally produces negative content. Although ChatGPT
considers its own generated text neutral, it shows consistent self-awareness
of nationality bias when subjected to the same pair-wise comparison
annotation framework used by human annotators. In conclusion, while ChatGPT's
generated texts seem friendly and positive, they reflect the inherent
nationality biases in the real world. This bias may vary across different
language versions of ChatGPT, indicating diverse cultural perspectives. The
study highlights the subtle and pervasive nature of biases within LLMs,
emphasizing the need for further scrutiny.
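The 4,680 discourses reported above follow from fully crossing the stated experimental factors. A minimal sketch of the implied grid (the two-language factor is inferred from the abstract's mention of Chinese and English; the exact factor structure is an assumption):

```python
# Experiment grid implied by the abstract (assumed full cross of factors):
# 195 countries x 4 temperature settings x 3 prompt types x 2 languages.
countries = 195
temperatures = 4   # temperature settings
prompt_types = 3   # distinct prompt types
languages = 2      # Chinese and English (inferred)

total_discourses = countries * temperatures * prompt_types * languages
print(total_discourses)  # 4680, matching the reported count
```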