Instilling conscience about bias and fairness in automated decisions

Journal of Computing Sciences in Colleges (2022)

Abstract
Automated decision-making that affects human interests, rights, and lives, in particular via techniques based on data mining and artificial intelligence, has become an integral part of many high-stakes applications such as sentencing and bail decisions, credit approval, hiring, and predictive policing. However, fairness concerns, such as discrimination based on race, age, or sex, stemming primarily from data and algorithmic bias, are among the major contemporary problems associated with automated decision-making. In a traditional Data Mining, Artificial Intelligence, or Machine Learning course, educators usually teach automated decision-making techniques with only limited coverage of their ethical concerns, such as fairness, transparency, and privacy. In this paper, we share our experience in building a fairness module and incorporating it, alongside traditional content, into an existing undergraduate Data Mining course, and we evaluate the outcome of the initiative. The module includes lectures and hands-on exercises on real-world datasets using state-of-the-art, open-source bias detection and mitigation software. The goal is to instill a consciousness of fairness and bias at an early stage in potential future developers of automated decision-making software. The module is easily adaptable and can be integrated into other relevant courses, including introductory Artificial Intelligence and Machine Learning courses.
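To illustrate the kind of bias check the hands-on exercises might involve, here is a minimal sketch of one common group-fairness metric, the demographic parity difference (the gap in positive-prediction rates between two groups). The abstract does not name the specific software or metrics used in the module; the function, data, and threshold below are hypothetical illustrations only.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (coded 0/1).

    A value near 0 suggests the classifier assigns positive outcomes
    at similar rates to both groups; large magnitudes flag potential bias.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_1 - rate_0

# Toy example (made-up data): group 1 is approved far more often than group 0.
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Open-source toolkits such as IBM's AIF360 and Microsoft's Fairlearn expose metrics like this (along with mitigation algorithms) out of the box, which makes them a natural fit for classroom exercises of the sort the paper describes.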