Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists
- Source :
- BMC Medical Ethics, Vol 25, Iss 1, Pp 1-10 (2024)
- Publication Year :
- 2024
- Publisher :
- BMC, 2024.
Abstract
- Abstract Background Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analysis was performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach. Results Participants were predominantly male (69.3%), with ages ranging from 27 to 77. Most worked in nonacademic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, lack of explicability in AI results, and doubts about AI’s ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, de-humanisation, and lack of explicability persist. 
Conclusions German physicians frequently encountering incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold a higher esteem for CESS. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.
Details
- Language :
- English
- ISSN :
- 1472-6939
- Volume :
- 25
- Issue :
- 1
- Database :
- Directory of Open Access Journals
- Journal :
- BMC Medical Ethics
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.03fcdbbb5d88406889bbe1c40eca7e5b
- Document Type :
- article
- Full Text :
- https://doi.org/10.1186/s12910-024-01079-z