Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies.
CoRR (2023)
Abstract
Social media platforms are rife with politically charged discussions.
Therefore, accurately deciphering and predicting partisan biases using Large
Language Models (LLMs) is increasingly critical. In this study, we address the
challenge of understanding political bias in digitized discourse using LLMs.
While traditional approaches often rely on finetuning separate models for each
political faction, our work innovates by employing a singular,
instruction-tuned LLM to reflect a spectrum of political ideologies. We present
a comprehensive analytical framework, consisting of Partisan Bias Divergence
Assessment and Partisan Class Tendency Prediction, to evaluate the model's
alignment with real-world political ideologies in terms of stances, emotions,
and moral foundations. Our findings reveal the model's effectiveness in
capturing emotional and moral nuances, albeit with some challenges in stance
detection, highlighting the intricacies and potential for refinement in NLP
tools for politically sensitive contexts. This research contributes
significantly to the field by demonstrating the feasibility and importance of
nuanced political understanding in LLMs, particularly for applications
requiring acute awareness of political bias.