AI and the need for justification (to the patient)

Authors :
Muralidharan, Anantharaman
Savulescu, Julian
Schaefer, G. Owen
Source :
Ethics & Information Technology; Mar2024, Vol. 26 Issue 1, p1-12, 12p
Publication Year :
2024

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
13881957
Volume :
26
Issue :
1
Database :
Complementary Index
Journal :
Ethics & Information Technology
Publication Type :
Academic Journal
Accession Number :
175837808
Full Text :
https://doi.org/10.1007/s10676-024-09754-w