
Title :
The doctor will polygraph you now.

Authors :
Anibal J
Gunkel J
Awan S
Huth H
Nguyen H
Le T
Bélisle-Pipon JC
Boyer M
Hazen L
Bensoussan Y
Clifton D
Wood B
Source :
Npj health systems [Npj Health Syst] 2024; Vol. 1 (1), pp. 1. Date of Electronic Publication: 2024 Dec 05.
Publication Year :
2024

Abstract

Artificial intelligence (AI) methods have been proposed for the prediction of social behaviors that could be reasonably understood from patient-reported information. This raises novel ethical concerns about respect, privacy, and control over patient data. Ethical concerns surrounding clinical AI systems for social behavior verification can be divided into two main categories: (1) the potential for inaccuracies/biases within such systems, and (2) the impact on trust in patient-provider relationships with the introduction of automated AI systems for "fact-checking", particularly in cases where the data/models may contradict the patient. Additionally, this report simulated the misuse of a verification system using patient voice samples and identified a potential LLM bias against patient-reported information in favor of multi-dimensional data and the outputs of other AI methods (i.e., "AI self-trust"). Finally, recommendations were presented for mitigating the risk that AI verification methods will cause harm to patients or undermine the purpose of the healthcare system.

Competing Interests :
The authors declare no competing interests.

(© The Author(s) 2024.)

Details

Language :
English
ISSN :
3005-1959
Volume :
1
Issue :
1
Database :
MEDLINE
Journal :
Npj health systems
Publication Type :
Academic Journal
Accession Number :
39759269
Full Text :
https://doi.org/10.1038/s44401-024-00001-4