'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI
arXiv (2023)
Abstract
This article explores how the 'rules in use' from Ostrom's Institutional
Analysis and Development Framework (IAD) can be developed as a context analysis
approach for AI. AI risk assessment frameworks increasingly highlight the need
to understand existing contexts. However, these approaches do not frequently
connect with established institutional analysis scholarship. We outline a novel
direction illustrated through a high-level example to understand how clinical
oversight is potentially impacted by AI. Much current thinking regarding
oversight for AI revolves around the idea of decision makers being in-the-loop
and, thus, having the capacity to intervene to prevent harm. However, our analysis
finds that oversight is complex, frequently exercised by teams of professionals,
and reliant upon explanation to elicit information. Professional bodies and
liability also function as institutions of polycentric oversight. These are all
impacted by the challenge of oversight of AI systems. The approach outlined has
potential utility as a policy tool of context analysis aligned with the 'Govern
and Map' functions of the National Institute of Standards and Technology (NIST)
AI Risk Management Framework; however, further empirical research is needed.
Our analysis illustrates the benefit of existing institutional analysis
approaches in foregrounding team structures within oversight and, thus, in
conceptions of 'human in the loop'.