Gender De-Biasing in Speech Emotion Recognition

INTERSPEECH (2019)

Abstract
Machine learning can unintentionally encode and amplify negative biases and stereotypes present in humans, whether conscious or unconscious. This has led to high-profile cases in which machine learning systems were found to exhibit bias with respect to gender, race, and ethnicity, among other demographic categories. Negative bias can be encoded in these algorithms through the representation of different population categories in the training data, through bias arising from the manual human labeling of these data, and through the choice of model type and optimisation approach. In this paper we assess the effect of gender bias in speech emotion recognition and find that the accuracy of emotional activation models is consistently lower for female than for male audio samples. Further, we demonstrate that fairer and more consistent model accuracy can be achieved by applying a simple de-biasing training technique.
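The abstract does not specify the de-biasing technique. As a minimal sketch only, one common "simple de-biasing training technique" is to re-balance gender groups during training by sampling each utterance with probability inversely proportional to its group's frequency, so mini-batches are approximately gender-balanced in expectation. The names and shapes below (`features`, `labels`, `genders`) are illustrative stand-ins, not the authors' data or code.

```python
# Sketch: gender-balanced resampling for speech emotion recognition training.
# This is one plausible de-biasing approach, not necessarily the paper's method.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-ins for acoustic features, emotion labels, and speaker gender
# (0 = male, 1 = female); in practice these would come from an SER corpus.
features = torch.randn(1000, 40)          # e.g. 40-dim acoustic features
labels = torch.randint(0, 4, (1000,))     # e.g. 4 emotion classes
genders = torch.randint(0, 2, (1000,))    # speaker gender per utterance

# Inverse-frequency sample weights: the under-represented gender is drawn
# more often, yielding roughly gender-balanced mini-batches in expectation.
group_counts = torch.bincount(genders, minlength=2).float()
sample_weights = (1.0 / group_counts)[genders]

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=32,
                    sampler=sampler)

for x, y in loader:
    pass  # train the emotion model on the gender-balanced batches here
```

An alternative with the same effect is to keep the sampler uniform and instead weight each example's loss by the inverse frequency of its gender group; resampling is shown here because it requires no change to the loss function.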