Artificial intelligence in clinical decision-making: Rethinking personal moral responsibility

BIOETHICS (2024)

Abstract
Artificially intelligent systems (AISs) are being created by software development companies (SDCs) to influence clinical decision-making. Historically, clinicians have led healthcare decision-making, and the introduction of AISs makes SDCs novel actors in the clinical decision-making space. Although these AISs are intended to influence a clinician's decision-making, SDCs have been clear that clinicians are in fact the final decision-makers in clinical care and that AISs can only inform their decisions. As such, the default position is that clinicians should hold responsibility for the outcomes of the use of AISs. However, this default is not justifiable when an AIS has influenced a clinician's judgement and their subsequent decision. In this paper, we argue that this is an imbalanced and unjust position, and that careful thought needs to go into how personal moral responsibility for the use of AISs in clinical decision-making should be attributed. The paper draws on the distinction between prospective and retrospective responsibility and treats foreseeability as key to determining how personal moral responsibility can be justly attributed. This leads us to the view that moral responsibility for the outcomes of using AISs in healthcare ought to be shared by the clinical users and SDCs.
Key words
artificial intelligence, bioethics, clinical decision-making, clinician, ethics, responsibility