Transparency and explainability of AI systems: From ethical guidelines to requirements

INFORMATION AND SOFTWARE TECHNOLOGY (2023)

Abstract
Context and Motivation: Recent studies have highlighted transparency and explainability as important quality requirements of AI systems. However, there are still relatively few case studies that describe the current state of defining these quality requirements in practice.
Objective: This study consisted of two phases. The first goal of our study was to explore what ethical guidelines organizations have defined for the development of transparent and explainable AI systems; we then investigated how explainability requirements can be defined in practice.
Methods: In the first phase, we analyzed the ethical guidelines of 16 organizations representing different industries and the public sector. We then conducted an empirical study to evaluate the results of the first phase with practitioners.
Results: The analysis of the ethical guidelines revealed that the importance of transparency is highlighted by almost all of the organizations, and explainability is considered an integral part of transparency. To support the definition of explainability requirements, we propose a model of explainability components for identifying explainability needs and a template for representing explainability requirements. The paper also describes the lessons we learned from applying the model and the template in practice.
Contribution: For researchers, this paper provides insights into what organizations consider important in the transparency and, in particular, the explainability of AI systems. For practitioners, this study suggests a systematic and structured way to define the explainability requirements of AI systems. Furthermore, the results emphasize a set of good practices that help to define the explainability of AI systems.
Keywords
Transparency, Explainability, Ethical guidelines, Quality requirements, Explainability requirements, AI systems