FairLENS: Assessing Fairness in Law Enforcement Speech Recognition
CoRR (2024)
Abstract
Automatic speech recognition (ASR) techniques have become powerful tools,
enhancing efficiency in law enforcement scenarios. To ensure fairness for
demographic groups in different acoustic environments, ASR engines must be
tested across a variety of speakers in realistic settings. However, describing
the fairness discrepancies between models with confidence remains a challenge.
Meanwhile, most public ASR datasets are insufficient for a satisfactory
fairness evaluation. To address these limitations, we built FairLENS, a
systematic fairness evaluation framework. We propose a novel and adaptable
evaluation method to examine the fairness disparity between different models.
We also collected a fairness evaluation dataset covering multiple scenarios and
demographic dimensions. Leveraging this framework, we conducted fairness
assessments on 1 open-source and 11 commercially available state-of-the-art ASR
models. Our results reveal that certain models exhibit more biases than others,
serving as a fairness guideline for users to make informed choices when
selecting ASR models for a given real-world scenario. We further explored model
biases towards specific demographic groups and observed that shifts in the
acoustic domain can lead to the emergence of new biases.
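The paper does not detail its evaluation method in the abstract, but the core idea of measuring fairness disparity across demographic groups can be illustrated with per-group word error rate (WER). The sketch below is purely illustrative and is not the FairLENS method itself; `edit_distance`, `group_wer`, and the max-min "gap" disparity proxy are assumptions for this example.

```python
from collections import defaultdict

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists
    (rolling single-row dynamic programming)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # d[j] = deletion, d[j-1] = insertion, prev = substitution/match
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def group_wer(samples):
    """samples: iterable of (group, reference, hypothesis) strings.
    Returns {group: corpus-level WER} and the max-min gap, a simple
    disparity proxy across demographic groups."""
    edits, words = defaultdict(int), defaultdict(int)
    for g, ref, hyp in samples:
        r, h = ref.split(), hyp.split()
        edits[g] += edit_distance(r, h)
        words[g] += len(r)
    wers = {g: edits[g] / max(words[g], 1) for g in edits}
    gap = max(wers.values()) - min(wers.values())
    return wers, gap
```

A model could then be scored on each demographic group's transcripts, with a larger gap indicating a larger fairness disparity between the best- and worst-served groups.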