Understanding and Addressing Gender Bias in Expert Finding Task
arXiv (2024)
Abstract
The Expert Finding (EF) task is critical in Community Question Answering (CQA)
platforms, significantly enhancing user engagement by improving answer quality
and reducing response times. However, biases, especially gender biases, have
been identified in these platforms. This study investigates gender bias in
state-of-the-art EF models and explores methods to mitigate it. Utilizing a
comprehensive dataset from StackOverflow, the largest community in the
StackExchange network, we conduct extensive experiments to analyze how EF
models' candidate identification processes influence gender representation. Our
findings reveal that models relying on reputation metrics and activity levels
disproportionately favor male users, who are more active on the platform. This
bias results in the underrepresentation of female experts in the ranking
process. We propose adjustments to EF models that incorporate a more balanced
preprocessing strategy and leverage content-based and social network-based
information, aiming to provide a fairer representation of genders among
identified experts. Our analysis shows that integrating these methods can
significantly enhance gender balance without compromising model accuracy. To
the best of our knowledge, this study is the first to focus on detecting and
mitigating gender bias in EF methods.