MBFair: a model-based verification methodology for detecting violations of individual fairness

Software and Systems Modeling (2024)

Abstract
Decision-making systems are prone to discriminating against individuals with regard to protected characteristics such as gender and ethnicity. Detecting and explaining the discriminatory behavior of implemented software is difficult. To avoid the possibility of discrimination from the outset of software development, we propose a model-based methodology called MBFair that allows UML-based software designs to be verified with regard to individual fairness. The verification in MBFair is performed by generating temporal logic clauses, whose verification results enable reporting on the individual fairness of the targeted software. We study the applicability of MBFair in three real-world case studies: a bank services system, a delivery system, and a loan system. We empirically evaluate the necessity of MBFair in a user study, comparing it against a baseline scenario in which no modeling or tool support is offered. Our empirical evaluation indicates that analyzing the UML models manually produces unreliable results, with a high chance of 46% of overlooking discrimination.
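
As a rough illustration only (the predicates request, sameExceptFor, and decision are hypothetical and not taken from the paper), an individual-fairness requirement for two individuals i and j who differ only in a protected characteristic p could be phrased as an LTL-style clause:

\[
\mathbf{G}\Big( \big(\mathit{request}(i) \land \mathit{request}(j) \land \mathit{sameExceptFor}(i, j, p)\big) \rightarrow \mathbf{F}\,\big(\mathit{decision}(i) = \mathit{decision}(j)\big) \Big)
\]

That is, globally (G), whenever two such requests occur, the system must eventually (F) return the same decision for both individuals.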
Keywords
Software fairness, Individual fairness, Model-based verification, UML