Multiple stakeholders drive diverse interpretability requirements for machine learning in healthcare

Nature Machine Intelligence (2023)

Abstract
Applications of machine learning are becoming increasingly common in medicine and healthcare, enabling more accurate predictive models. However, this often comes at the cost of interpretability, limiting the clinical impact of machine learning methods. To realize the potential of machine learning in healthcare, it is critical to understand such models from the perspective of multiple stakeholders and various angles, necessitating different types of explanation. In this Perspective, we explore five fundamentally different types of post-hoc machine learning interpretability. We highlight the different types of information that they provide, and describe when each can be useful. We examine the various stakeholders in healthcare, delving into their specific objectives, requirements and goals. We discuss how current notions of interpretability can help meet these and what is required for each stakeholder to make machine learning models clinically impactful. Finally, to facilitate adoption, we release an open-source interpretability library containing implementations of the different types of interpretability, including tools for visualizing and exploring the explanations.
Key words
diverse interpretability requirements, multiple stakeholders, machine learning, healthcare